[discuss] Criteria for Internet Governance (was) Re: List announcement "robust governance in the digital age"

Jefsey jefsey at jefsey.com
Tue Feb 11 19:03:02 UTC 2014


At 11:47 11/02/2014, Michel Gauthier wrote:
>I understand. However, my starting point is that the very
>architectonic concept of ICANN is an anti-MS proposition. Changing
>ICANN before I am independently convinced by architectonic
>arguments that I am wrong will not help.
>
>Look, we face real-life propositions:
>
>1. ICANN, which does not respect its own published and currently
>enforced policy (ICP-3) and tries to trap us with MS as bait.
>2. The IUCG's proposed HomeRoot experimentation (in line with this
>ICANN ICP-3 policy), which wants to take full advantage of the
>RFCs to allow each stakeholder to trust only his/her own root and
>to protect most of his/her DNS metadata, as several people I know
>have done for years.
>
>I realize that these two propositions lead to totally different IG
>concepts. So, as a complementary, unbiased decision element, I am
>also interested to know what the criteria for MS-IG success should
>be in each case.

Michel,

Some architectonic consideration is necessary at this stage.

Whether we like it or not, we belong to the universe, which is a
chaotic hardware, software, and brainware history between
mathematical continuity and quantum discontinuity, the two faces of
the semantic tale that we are writing together. What we are trying
to do on this list is to bring some concerted, self-stabilizing
management to a core part of it (its digisphere's communication
system).

As with everything that we cannot do, think, build, or decide
alone, we need to discover and understand the internal rules of the
universal, fractally deterministic chaos that we have to use in
order to obtain the emergence of a sustainable MS system out of a
self-organized-criticality architectonic process. This is because
complexity works that way and the internet is a "very large system"
subject to the Simplicity Principle (cf. RFC 3439). We need to
reduce its complicacy to complexity before it blocks (probably
through distrust).

To understand this, one has to accept that networking increases the
number of influences to the point that no one can perceive the
entire picture. Our brains simply do not scale. Therefore, we
cannot determine whether the system is complicated (not optimal) or
complex (optimal). What RFC 3439 implies is that the only way we
have to best design the network is to keep things simple (optimal)
at the scales and locations we master. However, this kind of
reductionism only partly permits a small-scope synergy that results
in local negentropy (why the whole is more than the sum of its
parts, the universe's formula: why it has not yet dissolved into
entropy). This is another way of expressing Shannon's entropic
noise.
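
To put one number on that last point: Shannon's entropy,
H(X) = -sum(p(x) * log2 p(x)), measures the disorder of a source,
and local organization (synergy) shows up as a distribution skewed
away from the uniform, i.e. as lower entropy. A minimal
illustrative sketch in Python (both distributions are invented for
the example):

    # Shannon entropy of two hypothetical message distributions,
    # showing that local organization (a skewed distribution) means
    # lower entropy than uniform "noise".
    from math import log2

    def shannon_entropy(probabilities):
        """H(X) = -sum(p * log2 p) over outcomes with p > 0, in bits."""
        return -sum(p * log2(p) for p in probabilities if p > 0)

    uniform = [0.25, 0.25, 0.25, 0.25]    # maximal disorder
    organized = [0.70, 0.20, 0.05, 0.05]  # locally organized

    print(shannon_entropy(uniform))     # 2.0 bits
    print(shannon_entropy(organized))   # ~1.26 bits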

What self-organized criticality means is that we may not be able to
avoid a "catastrophe". After a catastrophe (i.e. the singularity of
an optimization conflict [a double or multiple constraint] at any
scale, in any area), a new optimization (equilibrium) will emerge
from the new initial conditions (a new chain of microstates)
resulting from the post-catastrophic remains. A long-lived or
well-designed system will use and survive its minor criticalities
and will pursue its project as if it were "attracted" by its own
deeper nature (teleonomy) or purpose (teleology).
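
For readers who want to see self-organized criticality at work
rather than take it on faith, the classic toy model is the
Bak-Tang-Wiesenfeld sandpile: grains are dropped at random, any
cell holding four grains topples onto its neighbours, and the
system drives itself to a critical state in which most avalanches
are minor and an occasional one is a "catastrophe". A minimal
Python sketch (grid size and grain count are arbitrary):

    # Bak-Tang-Wiesenfeld sandpile, the canonical toy model of
    # self-organized criticality. Grains are added at random; any
    # cell with 4+ grains topples, sending one grain to each
    # neighbour (grains falling off the edge are lost).
    import random

    SIZE = 20        # arbitrary grid size for the illustration
    GRAINS = 20000   # arbitrary number of grains to drop

    grid = [[0] * SIZE for _ in range(SIZE)]
    avalanche_sizes = []

    for _ in range(GRAINS):
        grid[random.randrange(SIZE)][random.randrange(SIZE)] += 1
        topples = 0
        unstable = True
        while unstable:
            unstable = False
            for i in range(SIZE):
                for j in range(SIZE):
                    if grid[i][j] >= 4:
                        grid[i][j] -= 4
                        topples += 1
                        unstable = True
                        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                            ni, nj = i + di, j + dj
                            if 0 <= ni < SIZE and 0 <= nj < SIZE:
                                grid[ni][nj] += 1
        avalanche_sizes.append(topples)

    print("largest avalanche:", max(avalanche_sizes), "topplings")
    print("quiet drops (<= 1 toppling):",
          sum(1 for s in avalanche_sizes if s <= 1), "of", GRAINS)

The heavy-tailed avalanche-size distribution this produces is the
statistical signature of minor criticalities punctuated by rare
catastrophes.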

The stability of an "n-body" system (the easiest image is the sun
and the planets) therefore depends on the stability of its
attractor. If its attraction is disturbed by another attractor, one
understands that the whole system may become unstable. This is what
is happening now with the internet, the IANA, the NSA, and ICANN.
The main internet attractor should be the IANA, but it is disturbed
by the economic size and political weight of ICANN's own centrality
and by the NSA meteor.

One will not blow up ICANN, because the same shot would kill the
IANA. The idea is therefore to disseminate and enlarge the IANA
attraction in order to increase stability and reduce centrality
(which is the true nature of a distributed system). The target is
to create a new equilibrium without hurting ICANN. ICANN's own
complexity will then adapt, hopefully without criticality, if the
transitions are smooth enough; new, complementary ICANNs will most
probably appear in other namespaces for new technologies through a
natural MS mechanism. Three moves are currently going in that
direction:

1. The OpenStand standardization smoothing by the IEEE, IETF, W3C,
IAB, and ISOC, which has to be extended through the netix concept
(a single operating language for processes and protocols).

2. The "HomeRoot" experimentation (as per ICP-3) for an individual 
sure and secure network centrality.

3. The need to continue the HomeRoot experimentation through a full
implementation and development of the Layer6/IUI, to answer the
multi-technology, multilingual, etc. motivations of the catenet
internetting proof of concept.
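
What the HomeRoot idea amounts to in practice is a resolver that
asks one's own root server rather than the public root. A minimal
sketch, assuming a hypothetical private root name server listening
on 127.0.0.1 and the dnspython library:

    # Query one's own root server (a hypothetical HomeRoot setup on
    # 127.0.0.1) for the apex NS set, instead of trusting the
    # public root. Requires: pip install dnspython
    import dns.message
    import dns.query
    import dns.rdatatype

    HOME_ROOT = "127.0.0.1"  # hypothetical address of one's own root

    query = dns.message.make_query(".", dns.rdatatype.NS)
    response = dns.query.udp(query, HOME_ROOT, timeout=2.0)

    for rrset in response.answer:
        for ns in rrset:
            print("root name server:", ns)

The point of the exercise is the one made in the quote above: each
stakeholder decides which root to trust, and the queries (DNS
metadata) never leave his/her own infrastructure.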

Humanity has created some "universal" systems, such as languages,
ethics, money, standardization, etc. The internet is the first
physical artificial universal. We are learning. In the universal,
fractally deterministic chaos, "fractally" means that the same
rules apply everywhere at every scale, and "deterministic" means
that there are actual rules. The rules are the links (syllodata)
among the elements (linked data) of the chaos, and knowledge of
them is the intellition (information on the chaos' internal
intelligence) that permits metadata and communications to be
established and maintained.

This is why ONS, NDN, and IDNS are among the key/hot topics. Will
we use IPv6 or LISP? How will the new ICANNs be set up?

Then we will have to patch, fix, and change the NSA-friendly TCP/IP
technology (Orange is now suing the NSA :-) - maybe ICANN could
propose an adapted UDRP and Brazil introduce a Sao Paulo
International Court of Cyber Justice?).

jfc



