As of May 6, ICANN released the first native-language country code top-level domains (ccTLDs) into the wild. The first three are for the United Arab Emirates, Egypt, and Saudi Arabia.
For security reasons which make a fair amount of sense, the ability for a browser to display non-Latin characters in domain names must be specifically enabled. The ability to include non-Latin characters in a URL has been available since [date], but ICANN's recent work to introduce standardized TLDs is another step toward opening up information accessibility. The TLD transition establishes the opportunity for true localization, so that a Cairene will not necessarily need to recognize a set of Latin characters to browse the web.
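Under the hood, DNS itself still speaks ASCII: internationalized labels are carried in the Punycode-based "xn--" form defined by the IDNA standards, and the browser translates between the two. As a rough sketch of that round trip, here is what the encoding looks like using Python's built-in `idna` codec (which implements the older IDNA 2003 rules; real browsers layer additional display policies on top):

```python
# Sketch of how an internationalized label travels through DNS.
# Python's built-in "idna" codec implements IDNA 2003; this is an
# illustration of the encoding, not a browser's full IDN handling.

def to_ascii(label: str) -> str:
    """Encode a Unicode label to the ASCII ("xn--") form used in DNS."""
    return label.encode("idna").decode("ascii")

def to_unicode(ascii_label: str) -> str:
    """Decode an ASCII DNS label back into its Unicode display form."""
    return ascii_label.encode("ascii").decode("idna")

print(to_ascii("bücher"))           # -> xn--bcher-kva
print(to_unicode("xn--bcher-kva"))  # -> bücher
```

The point is that the non-Latin name a user sees and the name actually registered and resolved are two representations of the same label, and the client is responsible for converting between them.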
That said, I think the real significance of the transition relates more to convenience in the near term and a forward-looking consideration of future internet usage. Currently, the bulk of internet users do have some familiarity with Latin characters. Much of that comes from the course of computing's development and the reality that computing resources and software have only been moving in the direction of localization over the past ten years. One example of a project in line with that aim is Arabeyes, which has been working to translate various software and resources into Arabic. The Western-centric dominion of computing markets is beginning to wane as the Asian powers rise to the forefront of development and market share.
The risk comes from URL spoofing, wherein someone replaces a Latin character with a non-Latin character that appears identical. A good example from the initial release group is the similarity between the Cyrillic character [cyrillic] and the Latin character "a". With Cyrillic support enabled, [address] and [address] would appear indistinguishable, despite the registered domains being [address-ext] and [address-ext], respectively. Enabling the underlying translation from the displayed character string to the registered name requires only a couple of quick changes to the client browser.
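The homograph risk is easy to demonstrate: two labels that render identically on screen can be entirely different code points, and therefore entirely different registered domains. As a rough illustration (using the well-worn Cyrillic-"а"-for-Latin-"a" swap in a hypothetical label, and a crude script check via `unicodedata` that is far simpler than the per-TLD whitelists and confusability tables real browsers use):

```python
import unicodedata

def scripts_in(label: str) -> set:
    # Crude heuristic: take the first word of each character's Unicode
    # name ("LATIN", "CYRILLIC", ...). Mixed-script labels are a common
    # red flag for homograph spoofing.
    return {unicodedata.name(ch).split()[0] for ch in label if ch.isalpha()}

latin = "apple"        # all Latin characters (hypothetical example label)
spoof = "\u0430pple"   # first letter is U+0430, CYRILLIC SMALL LETTER A

print(latin == spoof)          # False: visually alike, different code points
print(scripts_in(spoof))       # {'CYRILLIC', 'LATIN'} -> mixed script
print(latin.encode("idna"))    # b'apple'
print(spoof.encode("idna"))    # b'xn--pple-43d' -> a distinct DNS name
```

So while the two strings are indistinguishable to the eye, DNS sees two unrelated names, which is exactly why browsers gate the Unicode display behind explicit opt-in or script policies.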