[discuss] Boundaries and sovereignty

JFC Morfin jefsey at jefsey.com
Mon Feb 3 15:23:57 UTC 2014


At 19:13 02/02/2014, John Curran wrote:
>On Feb 2, 2014, at 9:11 AM, JFC Morfin <jefsey at jefsey.com> wrote:
>
> > ...
> > In order to correctly document and keep the top zone clean, we 
> would need a permanent survey of the top zone consistency and 
> reality, in calling each name server and asking it for the list of 
> the TLDs of which they are currently keeping track of in their 
> buffers and the associated name servers.
>
>JFC -
>
>Interesting concept...  thank you for expressing it clearly above.

John,
You raise different issues.

A. I am afraid I cannot answer your question, because there is none. 
Let me explain why.

1. In my post, I first explain the general netix concept as the 
strict completion and convergence of the Tymnet/OSI/Internet/POSIX 
projects. "My" model is therefore the strict respect of everything 
that exists, welded together by some *additional* commands on the 
user side (when these are not already fully supported by some private 
environment). So it has nothing to do with the Internet or the name 
space as such, only with their more intelligent use. Not a single bit 
is changed, not a single comma in the RFCs is modified. It is just a 
matter of clarifying technical/political/commercial confusions or 
limitations.

2. Then I answer the question concerning the name space. It was 
asked, as far as I understand, because of the simplification that 
netix would probably bring to users in better managing and securing 
their DNS resolution process, for reasons partly given by Hindenburgo 
Pires.

This highlights that the way the DNS is conceived, developed and deployed is:
- adequate to the MS/globalization approach the I*coalition wishes 
for ICANN and IANA;
- incorrectly supported by ICANN's current model, which confuses 
collecting+documenting with regulating+selling.

If you do today what I describe in the two lines you quote, you will 
see that the top zone ICANN dreams of is quite different from the one 
that is really active. This explains ICANN's attitude: it denies that 
reality in order to give it the least publicity and not to encourage 
the top zone ... diversity (it prefers to sell).
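
(For the record, here is a rough sketch of that survey idea. It is 
nothing normative, just an illustration using the Python dnspython 
library; the server address and the candidate TLD list are 
placeholders to be replaced by whatever one actually wants to survey. 
It sends non-recursive probes and reports which TLDs a given name 
server currently answers for from its own cache or zones.)

    # Sketch only: survey which TLDs a given name server currently
    # answers for, without triggering any recursion.
    import dns.exception
    import dns.flags
    import dns.message
    import dns.query
    import dns.rdatatype

    SERVER = "192.0.2.53"      # name server to survey (placeholder)
    CANDIDATE_TLDS = ["com.", "fr.", "berlin.", "bit."]  # placeholders

    for tld in CANDIDATE_TLDS:
        q = dns.message.make_query(tld, dns.rdatatype.NS)
        q.flags &= ~dns.flags.RD   # RD=0: answer only from cache/zones
        try:
            r = dns.query.udp(q, SERVER, timeout=2.0)
        except dns.exception.Timeout:
            print(tld, "no reply")
            continue
        if r.answer:
            print(tld, "known:", [str(rr) for rr in r.answer[0]])
        else:
            print(tld, "not currently held by this server")

Running it against different resolvers is enough to compare their 
views of the top zone.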


B. Now let me come back to what you seem to call "my model", which is 
the situation ICANN sponsors today. ICANN says that reality is 
"illegal". Since this is not technically the case and it cannot 
technically block it, it hopes that selling expensive vanity TLDs 
will push investors to lobby for the DNS to be made its international 
legal monopoly, with Homeland Security and FBI support 
(https://www.ice.gov/news/releases/1312/131202washingtondc.htm). This 
is in continuity with its strategy oriented towards trademark owners, 
IPR, etc., i.e. an invasion of private use by commercial constraints.

Needless to say, many people resent being under the direct reach of 
US *Homeland* decisions. This is why there are divergences regarding 
the meaning of "globalization", also in the context of the American 
Cyber Competitiveness Act: 
http://searchcompliance.techtarget.com/guides/FAQ-What-is-the-current-status-of-US-cybersecurity-legislation

The network of ICANN is not the internet of Vint Cerf: it is the 
contractual net woven by Joe Sims. The ICANN globalization the world 
is waiting for is not so much about US-centered technical management. 
It is about restoring legal sovereignties and, where necessary, 
multilateral legal agreements.

Let us take an example. The WhoIs is a violation of the privacy laws 
of many countries. The best option is therefore to use non-ICANN TLDs 
which do not use that old Postel tool and which respect national laws 
(if you want to contact the authority behind a domain name, you send 
a mail to the contact address its zone itself publishes, as returned 
by nslookup).
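
(As a rough sketch only, using the Python dnspython library (version 
2 or later) and an example domain: this is how one can recover the 
contact address a zone publishes in its SOA record, the same address 
nslookup shows, without any WhoIs.)

    # Sketch only: read the contact address from a zone's SOA RNAME.
    import dns.resolver

    def soa_contact(domain: str) -> str:
        answer = dns.resolver.resolve(domain, "SOA")
        rname = str(answer[0].rname)   # e.g. "hostmaster.example.com."
        # Naive RNAME-to-mail conversion: the first label is the local
        # part (escaped dots in the local part are not handled here).
        local, _, rest = rname.partition(".")
        return f"{local}@{rest.rstrip('.')}"

    print(soa_contact("example.com"))  # illustrative output only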


C. Now, I understand your surprise at the way the DNS is actually 
designed: as a distributed system one can report on, not as a 
centralized hierarchy one can command.

The root server system is only a service to those who lack the 
processing capacity to resolve the most common names, to the NSA for 
collecting metadata, and to designers as a metric of internet use and 
development. As you know, more than 90% of the calls to the root 
server system are errors: dumb systems keep asking the same dumb 
questions about a file that everyone has and can freely extend. As 
Vint Cerf puts it, the internet root file is the most used one. There 
is no single such file, because there are plenty of them.


D. The only technical question is about propagation and buffer 
pollution. There is no possible buffer pollution since there is no 
propagation.

Propagation comes from the use of the root server system and of 
recursion. If you do not use them, you do not use/pollute any buffer 
(in any case, AJAX often implies very short TTLs). This is why the 
DNS model is a good and robust one that has to be better known and 
fully used. It can be used in several ways.

The only possible pollution is structural and results from the 
classless CNAME/DNAME wording/understanding in the RFCs. I suspect 
that practical testing will tell a lot, and that a few "legal" or 
"bold" decisions (e.g. the ".su" position regarding IDNA 
registrations, or the Chinese TLDs and keywords) will force this to 
be addressed one way or another. There is first a need to understand 
the true nature of CNAME/DNAME in several areas (technical, legal, 
IP, operations, etc.).
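
(To illustrate the technical side only, here is a minimal sketch in 
plain Python, with hypothetical zone names, of that nature: a CNAME 
aliases one single owner name, while a DNAME (RFC 6672) rewrites 
every name below its owner into another subtree.)

    # Sketch only: how a DNAME rewrites a whole subtree, while a CNAME
    # aliases only one exact name. Zone names are hypothetical.
    def apply_dname(qname: str, owner: str, target: str) -> str:
        """Rewrite qname if it falls strictly below the DNAME owner."""
        if qname == owner or not qname.endswith("." + owner):
            return qname            # the owner itself is not rewritten
        prefix = qname[: -len(owner) - 1]   # labels below the owner
        return f"{prefix}.{target}"

    # CNAME: only the exact name "www.example.fr." would be aliased.
    # DNAME: every name under "example.fr." maps into "example.com."
    print(apply_dname("mail.example.fr.", "example.fr.", "example.com."))
    # -> mail.example.com.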

>In such a model, can I understand what you propose would happen when 
>two name servers
>both have TLD's of the same name but different content?

This is not possible, so it would be a configuration bug. The root 
you use decides which TLD name servers you call. There is no recursion.
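
(A minimal sketch of this point, using the dnspython library: the 
same query sent to the root you have chosen, here a.root-servers.net, 
returns that root's delegation for the TLD; sent to an alternative 
root, it would return that other root's view instead.)

    # Sketch only: the delegation you get for a TLD depends entirely
    # on the root server you decided to ask.
    import dns.flags
    import dns.message
    import dns.query
    import dns.rdatatype

    ROOT = "198.41.0.4"            # a.root-servers.net (your choice)
    q = dns.message.make_query("com.", dns.rdatatype.NS)
    q.flags &= ~dns.flags.RD       # iterative query, no recursion
    r = dns.query.udp(q, ROOT, timeout=3.0)

    # The referral names the TLD servers this root delegates to.
    for rrset in r.answer + r.authority:
        print(rrset)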

>I can easily see how content could
>be aggregated when non-overlapping TLDs appear (e.g. the appearance 
>of ".jcurranslongishTLD"
>would not likely to appear in any other name server other than the 
>one I inserted it into, and hence
>it's propagation would therefore be relatively safe), but the 
>appearance of a second ".com" whose
>subdomains had conflicts or regional TLD and commercial TLD of the 
>same name could be very
>problematic.

This can only happen when people can access several alternative root 
server systems, as is the case today, if the root they use does not 
cooperate the way ORSN does (i.e. stay synchronized with the NTIA), 
and if their ISP provides recursion. Otherwise there is no technical 
problem. If you decide to use a kid-protection oriented ".com" in 
your own root, there is no problem.

However, some people may wish to have different visions of the name 
space supported/accessed (for example on a linguistic basis). Then 
classes provide the solution.

>Do you propose inconsistent resolution based on the user's resolver 
>configuration
>of region/locale/provider, or blocking of all conflicting TLDs till 
>manual resolution, or some other
>mechanism?

I propose nothing. I just confirm that there are ways to use internet 
resources more intelligently, in order to achieve more for the same 
money, programs, etc. with less hassle, and to respect the 
constitution of my own country regarding the cyber part of the 
citizens' environment. For a simple reason: this is what these 
resources have been designed for - to make the internet work better 
(something OpenStand tends to translate as "sell better").

Now, the question is whether there is a flaw in the RFCs, or in the 
way developers, technical users and end-users read them.

This is why this has to be tested. That is the purpose of the 
"HomeRoot" project. I suppose that most will use it by copying the 
"crown-root", then adding some blacklist and local TLDs for 
simplified use. Then we will see how the grassroots files 
differentiate and with what results, how their use compares with the 
root-server system, and how the different root data publishers may 
decide to cooperate.
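
(As a sketch only of what such a "HomeRoot" copy could look like, in 
Python: it fetches the public root zone copy published by InterNIC, 
drops a placeholder blacklist, and appends placeholder local TLD 
delegations. The blacklist, the local TLD and its server names are 
invented for the illustration.)

    # Sketch only: build a "HomeRoot" zone from a crown-root copy.
    import urllib.request

    ROOT_ZONE_URL = "https://www.internic.net/domain/root.zone"
    BLACKLIST = {"unwanted-tld."}          # TLDs to drop (placeholder)
    LOCAL_TLDS = [                         # local delegations (placeholder)
        "home.\t86400\tIN\tNS\tns1.home.example.net.",
    ]

    with urllib.request.urlopen(ROOT_ZONE_URL) as resp:
        lines = resp.read().decode("ascii", "replace").splitlines()

    def owner(line: str) -> str:
        parts = line.split()
        return parts[0].lower() if parts else ""

    def blacklisted(name: str) -> bool:
        # drop the TLD's own records and any glue below it
        return any(name == b or name.endswith("." + b) for b in BLACKLIST)

    kept = [l for l in lines if not blacklisted(owner(l))]
    kept.extend(LOCAL_TLDS)

    with open("home-root.zone", "w") as out:
        out.write("\n".join(kept) + "\n")
    print(len(kept), "records written to home-root.zone")

The resulting home-root.zone file can then be loaded by whatever 
local resolver one prefers.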

This is true MS-IG and open globalization.

jfc






