Workgroup: The Interpeer Project
Published:
Problem Statement & Use Cases for Human-Centric Networking
Abstract
This document describes issues with the current Internet's design with regard to human-centric use cases. It examines existing network architectures and their respective disadvantages in meeting the requirements of those use cases. The document is intended to serve as a problem statement for a novel, human-centric networking architecture designed to better serve human needs.¶
Its companion documents are [INTERPEER-REQUIREMENTS] and [INTERPEER-ARCHITECTURE].¶
About This Document
This document is a draft PIE and so adheres to the publishing process and naming described in [PIE.f92f09.00].¶
-
This version is published at PIE.f92f09.problem-statement-00¶
-
The latest version can be found at PIE.f92f09.problem-statement¶
Contributing¶
Responsibility for this document lies with The Interpeer Project.¶
Source code for it can be found at https://codeberg.org/interpeer/draft-jfinkhaeuser-interpeer.¶
Additional coordination and discussion occurs on a mailing list:¶
-
Address: interpeer@lists.interpeer.org¶
1. Introduction
This document describes issues with the current Internet's architecture and design. The focal point for this examination is the ways in which the current architecture fails to serve human needs, or fails to serve them well. This is addressed in Section 2.¶
Section 3 provides use cases for the existing and emerging Internet as a guideline for what a forward-looking architecture needs to support.¶
This document is referred to as [INTERPEER-PROBLEM-STATEMENT] in its companion documents:¶
A gap analysis framework, as well as an analysis of alternate network architectures, is provided in [INTERPEER-REQUIREMENTS]. That document also lists the properties desirable in an architecture in order to solve the discussed issues; they can be considered requirements.¶
An architecture aimed at fulfilling these requirements is presented in [INTERPEER-ARCHITECTURE].¶
2. Problem Statement
Technology can never fix problems of society. At the same time, technological and societal change tend to occur hand-in-hand in history: either the solutions to pressures in society require technological advancement, or the invention of some technology enables a societal change, the need for which may only be fully understood later.¶
The currently primary Internet-enabled technology -- the World Wide Web (WWW) -- likewise contains problems that require solving. The issues discussed here are fundamentally societal in nature, or they are exacerbated by current social pressures.¶
It is easy to dismiss such non-technical issues in a technical forum, arguing that one should focus only on the hard facts. One such hard fact is that technology is created by and for humans. Another hard fact is that humans are not infallible. The fields of psychiatry and psychology deal with common failure modes of humans, and assign names to commonly identified behavior patterns.¶
"Cognitive inertia" [COGNITIVE-INERTIA] describes the difficulty in changing mental direction, analogous to how inertia in physics is the idea that objects keep moving in the same direction and at the same speed until something compels them to change. Similarly, "decision fatigue" [DECISION-FATIGUE] describes the effect that a rested mind finds it easier to assess facts and perform such changes in direction -- that is, to make decisions -- while being faced with decision after decision fatigues the mind. Having to come to a decision is akin to a kind of mental crisis that requires resolution for well-being. When fatigue sets in, the mind may therefore prioritize quick resolution over the most desirable long-term effects.¶
It is therefore well understood that, as a matter of probability, humans tend to follow the path of least resistance. The implication for engineers is that "a system is what a system does". That is, when acting within the constraints of a technical system, humans are prone to making certain decisions; it follows that the system's architectural constraints "induce" this behavior.¶
While this section started with stating that technology can never fix problems of society, the above strongly suggests that it can contribute to these problems. It is no stretch to imagine, then, that it can also contribute to the avoidance of the self-same problems by adjusting the system's architectural constraints.¶
The issues listed in the following sections are concrete examples of the impact properties of Internet technology have on the physical world and human beings living within it. They all relate to human rights in some fashion.¶
A more complete list of the impacts of protocol design decisions is maintained by the Human Rights Protocol Considerations Research Group within the IRTF. In particular, documents such as [RFC9620] provide practical considerations for protocol design.¶
[INTERPEER-REQUIREMENTS] will refer to this document in more detail.¶
2.1. Issues
The issues in this section may be societal in nature, but they also provide the context for technological issues outlined in [INTERPEER-REQUIREMENTS]. In particular, they provide a lens through which to view the relationship between architectural constraints of a system, the properties it induces, and a focus for evaluating whether those properties are desirable.¶
2.1.1. Surveillance Capitalism
The term was popularized by the 2019 book "The Age of Surveillance Capitalism" [AOSC], and is defined on Wikipedia as follows:¶
-
Surveillance capitalism is a concept in political economics which denotes the widespread collection and commodification of personal data by corporations. [SURVEILLANCE-CAPITALISM]¶
Surveillance capitalism can only thrive in a system where personal data is easily available. Additionally, "data" is relatively worthless in itself until it is linked to other data, such as linking a name to the purchase of some medication. Once the link is established, one can infer further information, such as that the person making the purchase likely suffers from a condition that the medication is commonly prescribed for. It is then feasible to exploit this link by e.g. targeting advertising or deals at audiences with the highest probability of a "conversion" (i.e. sale).¶
One cannot link data without collecting it, which means that data collection is the activity that fuels the establishment of more links, and thus the opening of more potential revenue streams.¶
Attempts at curbing surveillance capitalism through policy tend to focus on company size or monopoly position. However, as Tarnoff notes in "Internet for the People" [IFTP], it is competition that drives data collection. Specifically, it is the competitive motivation to discover new or better monetizable links between individual data points.¶
2.1.2. Information Warfare
According to Zuboff, data collection is not necessarily problematic in itself. The problems arise during usage: when personal profiles are used to target advertising to the recipient, they can also be used to target disinformation, and thus drive decision making processes. Either is monetizable, and the logic of capitalism dictates that such monetization avenues must be exploited.¶
Surveillance capitalism, in other words, enables power brokerage to become indistinguishable from demagogy. It is no longer necessary to target specific powerful politicians to influence them, when instead one can manipulate their electorate.¶
A plethora of political events in recent years demonstrate not just the possibility of this, but the practice as well. Psychographic profiling of Facebook users allowed, and continues to allow, voter manipulation at a massive scale. And yet, the FTC response to one such event "has been criticized as failing to adequately address the privacy and other harms emanating from Facebook’s release of approximately 87 million Facebook users' data, which was exploited without user authorization." [CAMBRIDGE-ANALYTICA]¶
Meanwhile, these tactics are further expanded upon in so-called "hybrid warfare", which bridges the battlefield and disinformation campaigns [HYBRID-WARFARE]. The basis, however, remains the same: access to user information that permits identifying demographics vulnerable to disinformation attacks.¶
2.1.3. Centralization
Where surveillance capitalism describes the market mechanics that lead to data collection en masse, it fails to describe the more technical effect this has: the Web becomes increasingly centralized.¶
Web centralization has multiple contributing factors. One of the more fundamental ones, however, is that the more data is funnelled through a centralized service, the easier it becomes to collect data and to establish impactful links. Competitive advantages are gained by whoever provides this service.¶
This is essentially equivalent to any other factor contributing to the emergence of monopolies; the key differences are that:¶
-
Antitrust laws are more effective in other markets outside of electronic mass data collection, and¶
-
centralization of data collection can be sold more easily as a feature than a bug; so-called "platforms" ostensibly create new markets while enabling the platform owners to skim a significant share of the proceeds.¶
The "centralization" term, however, is hotly debated. For every argument that a system is centralized, counter-arguments can be raised. The somewhat surprising realization is that most of these arguments are due to either¶
-
an instance of the "layering problem", whereby participants in a debate view distinct layers of the system stack (e.g. the World Wide Web vs. the Internet), or¶
-
an unclear definition of how one should measure "centralization".¶
For the purposes of this document, we define "centralization" as more or less identical to the definition of "betweenness centrality" in network or graph theory. "Betweenness centrality" examines communication flows between nodes in a communication graph, from all nodes A via any number of intermediary nodes to all nodes B. For every node C (that is part of neither A nor B), it counts the number of communication flows along whose path it sits, i.e. the flows it is "between"; higher counts mean higher centrality.¶
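The measure can be sketched in code. The following Python fragment is a minimal, brute-force illustration of betweenness counting on an invented star topology; node names are hypothetical, and real analyses would use an optimized algorithm such as Brandes'.¶

```python
from collections import deque

def all_shortest_paths(graph, s, t):
    """Enumerate all shortest paths from s to t via breadth-first search."""
    dist, preds, queue = {s: 0}, {s: []}, deque([s])
    while queue:
        u = queue.popleft()
        for v in graph[u]:
            if v not in dist:
                dist[v], preds[v] = dist[u] + 1, [u]
                queue.append(v)
            elif dist[v] == dist[u] + 1:
                preds[v].append(u)
    if t not in dist:
        return []
    paths = []
    def walk(node, tail):
        if node == s:
            paths.append([s] + tail)
            return
        for p in preds[node]:
            walk(p, [node] + tail)
    walk(t, [])
    return paths

def betweenness(graph):
    """For every node C, count the shortest communication flows it sits
    'between'; ordered pairs (A, B) are counted, splitting ties evenly."""
    score = {n: 0.0 for n in graph}
    for s in graph:
        for t in graph:
            if s == t:
                continue
            paths = all_shortest_paths(graph, s, t)
            for path in paths:
                for c in path[1:-1]:  # intermediaries only
                    score[c] += 1.0 / len(paths)
    return score

# A star topology: every leaf-to-leaf flow passes through the hub.
star = {"hub": ["a", "b", "c"], "a": ["hub"], "b": ["hub"], "c": ["hub"]}
# hub scores 6.0 (all six ordered leaf-to-leaf flows); leaves score 0.0
```

The same computation, applied per layer of a communications stack, can single out different "central" nodes at each layer.¶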
The measure is useful because it can be applied to any layer of a communications stack, and may yield different results at each layer. For example, an [OAUTH] provider may sit "between" an identity management system and many applications, even if at a lower network layer, traffic is routed to each of these parties along entirely non-overlapping paths.¶
Centralization as "betweenness centrality" illustrates the dangers of such centralized designs -- the highly centralized node becomes a single point of failure for a much larger system:¶
-
Many services are disrupted when the centralized node experiences disruption.¶
-
It is easier to gain access and maliciously manipulate or harvest data on one centralized node than in a widely distributed system.¶
The above describes failure modes of centralization. Proponents of centralization argue that mitigating such failures also becomes easier with centralization. While true to an extent, similar effects can be achieved with processes and tooling, without introducing the risk of these specific failures.¶
2.1.4. Oppression & Genocide
The Internet is a great enabler for communities to come together; this is in particular the case for minority communities that have no other representation in popular media. The Uyghur of China are such a minority, whose usage of the Internet to keep their identity alive has been well documented [UYGHUR-INTERNET].¶
Equally well documented, unfortunately, is the "cultural genocide" of the Uyghur [UYGHUR-WAR]. The role of the Internet in this is just as central as it was in bringing the community together in the first place. Reports indicate that China is using artificial intelligence operating on personal profile data to identify Uyghur and to target them methodically [UYGHUR-AI].¶
Similarly, the Amnesty International report [AUTOMATED-APARTHEID] illustrated already in 2023 how Israel's use of facial recognition "fragments, segregates and controls Palestinians" in the Occupied Palestinian Territories. A year later, the organization updated its assessment of the situation specifically in Gaza, calling it "genocide" ([GAZA-GENOCIDE]) -- a view informed by the ongoing and updated case 192 of the International Court of Justice against Israel [GAZA-GENOCIDE-ICJ].¶
From the perspective of Internet engineers, it is not necessary to understand the ins and outs of a specific political situation. What is required is the broad understanding that collecting large amounts of personally identifiable information (PII) in a central location enables devastating misuse. The consequence then must be to protect PII and avoid centralization to counteract this.¶
2.1.5. Machine Learning
The role of artificial intelligence in Section 2.1.4 is one example of how access to PII can be abused. The article "Artificial Intelligence, Advertising, and Disinformation" [ML-DISINFORMATION] lays out the overlap between these technologies in more detail.¶
It is one thing when machine learning is used to impersonate a politician as part of a disinformation campaign: images and videos of politicians are public goods, as such persons partially give up their right to privacy by being public figures.¶
Given access to PII, the exact same technology can be used in more personal cyber attack scenarios: for example, access to voice recordings can allow an attacker to use a cloned voice in an attack scenario [ML-VOICE].¶
The ramification is that with more PII available, ML-based attacks or attacks making use of ML techniques evolve to exploit this access.¶
2.1.6. State Control over Media & Censorship
Whereas previous examples focused to a large degree on the availability of personally identifiable information, which may be exacerbated by centralization, centralization poses an additional risk all by itself: centralization aids censorship.¶
The CensoredPlanet project [CENSORED-PLANET] monitors censorship of the Internet around the globe, using a variety of techniques. However, they all revolve around measuring access to parts of the Internet, such as the parts serving a news site, or an entire country's network.¶
It's worth highlighting that access is not necessarily binary here. Rather than blocking access, a particular path may merely become slowed down to the point where Internet users prefer to turn to other resources instead.¶
In either case, censorship relies on identifying targets to censor. The more access to targets is funneled through centralized choke points, the easier it is to apply censorship in those locations.¶
In many cases, this can be as simple as a government buying media outlets outright, a practice that has led to the establishment of the Media Development Investment Fund (MDIF) [MEDIA-INDEPENDENCE]. In this latter form, centralization affects media production in any given outlet rather than access to its results.¶
It is necessary to consider the entire pipeline from media creation to consumption, however. Arguably, the role of media outlets has historically been to collect potential news, filter it for some notion of "quality" to bring a subset of this source material to production, and then to distribute it again.¶
In a fully digitized world, collection and distribution find equivalences in ingress and egress traffic, while the news production itself is a data processing task. In other words, the media outlet model is centralized largely because the data processing tasks would historically have been impossible to distribute. This centralization, however, is also the cause for its vulnerability to state control and censorship.¶
It is imaginable, then, that if data processing tasks are easy to distribute, and the Internet offers the infrastructure to do so, media censorship would become a significantly harder endeavor.¶
2.1.7. Information Loss
Previous problems were for the most part focused on data collection and centralization of critical services. A different, yet technically related issue also plagues the Web, that of so-called "link rot".¶
The term describes the fact that when a web page links to some other page or resource on the web, the target may disappear at any time. For the most part, this is an unfortunate side effect of decentralization -- namely, decentralization of editorial control. As the site hosting the target resource of a link may undergo redesign, and redesign often (unfortunately) goes hand in hand with reorganization of URLs, once-valid URLs may become invalid. It follows that any link pointing at such a URL equally becomes invalid.¶
The web is not the only system to link information. Scientific publications often reference each other. For purposes such as this, the Digital Object Identifier ([DOI]) scheme has been standardized. In order to remain relevant in hyperlinked systems, the DOI scheme also includes mechanisms for resolving a DOI to a URL, which "owners" of the DOI can update. A specialized service operated by the DOI Foundation resolves a DOI and redirects to the current target URL. In this fashion, DOIs represent stable identifiers and can be used to create stable links on the web that resist link rot.¶
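A minimal sketch may help illustrate the indirection involved. The DOI and URLs below are invented, and a real resolver is an HTTP redirect service rather than an in-memory table.¶

```python
# A sketch of DOI-style indirection: the identifier is stable, but the
# mapping behind it is mutable. All names here are invented.
registry = {"10.9999/example.42": "https://journal.example/v1/article"}

def resolve(doi: str) -> str:
    """Return the current target URL registered for a DOI."""
    return registry[doi]

# After a site redesign, the DOI "owner" updates the target ...
registry["10.9999/example.42"] = "https://journal.example/articles/42"

# ... and links via the DOI remain valid. Note, however, that nothing
# here constrains the *content* served at the target URL.
```

The sketch also shows the scheme's limitation: stability applies to the mapping, not to the bytes behind it.¶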
However, as the example of [NIST.IR.8366] tragically showcases, nothing prevents the target website from keeping URLs intact, but altering the content or redacting it entirely.¶
While all the reasons that make centralized control over information resources problematic apply here as well (see issues with e.g. mis- or disinformation above), the lack of truly stable identifiers for information resources also makes the building and maintenance of information networks significantly harder. Additionally, restricting access to once-available information can be used to influence just as much as not granting access in the first place.¶
When considering the preservation of the total body of human knowledge, resilient to accidental or malicious loss, stable identifiers to information resources are a key ingredient that DOIs unfortunately fail to solve.¶
In addition to stability, identifiers should also be verifiable. That is, it should be possible to verify that a piece of information, a sequence of bytes encoding some knowledge, is actually the sequence of bytes that is meant to be indicated by an identifier. There are several technical means at our disposal to verify such things, but they require an information theory approach that URLs completely lack; URLs are not in themselves verifiable in this fashion.¶
Finally, if sequences of bytes can be linked to identifiers, then the storage location of that sequence of bytes on any particular server is no longer of paramount concern. Such sequences of bytes can be infinitely copied and archived, and still be identifiable. In this way, verifiable identifiers become the engine of knowledge preservation that the current web lacks.¶
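One well-known approach to such verifiable identifiers is content addressing, where the identifier is derived from the byte sequence itself. The following Python sketch illustrates the principle only; it is not the scheme proposed in the companion documents, and the "sha256-" prefix and function names are invented.¶

```python
import hashlib

def content_id(data: bytes) -> str:
    """Derive a stable, verifiable identifier from the bytes themselves."""
    return "sha256-" + hashlib.sha256(data).hexdigest()

def verify(identifier: str, data: bytes) -> bool:
    """Check that a copy matches its identifier, regardless of which
    server or archive the copy was retrieved from."""
    return content_id(data) == identifier

doc = b"a sequence of bytes encoding some knowledge"
ident = content_id(doc)
assert verify(ident, doc)                     # any faithful copy verifies
assert not verify(ident, doc + b" tampered")  # alterations are detectable
```

Because the identifier follows from the bytes, every archived copy is equally authoritative -- which is precisely what makes such identifiers suitable for knowledge preservation.¶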
It should be mentioned that several attempts have been made to create stable identifiers on the web, but these efforts largely failed. The most noticeable remnant of these efforts is visible in the distinction between URIs and URLs, i.e. Uniform Resource Identifiers and Locators. The long-standing idea has been to introduce resolvers to the web that resolve URIs to URLs, but a standard for such resolvers has not become commonplace.¶
2.1.8. Intermittency
Like the previous issue, the following is not directly a problem of data and centralization, but is equally linkable to such issues: neither the Internet nor the Web truly deals well with intermittency, that is, delay or disruption in communications.¶
The Internet is designed to be a synchronous communications network for computers. Its mechanics allow for short delays, and a fair few efforts are directed at making such delays as short as possible (e.g. [RFC9330] and related work).¶
The general approach to intermittency on both the Internet and Web is to introduce timeouts for queries, which then potentially result in renewed attempts to perform the query, or ultimately errors displayed to the user. While this is a valid approach for basic machine-to-machine communications, several use cases present rather more stringent requirements; these include e.g. command and control of vehicles in remote locations.¶
It's also worth highlighting that this approach to intermittency is actively exploited in censorship (see Section 2.1.6), that is, access to resources is deliberately slowed down to incur errors.¶
2.2. Additional Context
2.2.1. Internet vs. Web
In the above text, and in the rest of this document, the terms "Internet" and "Web" (referring to the World Wide Web, or WWW) are used more or less interchangeably.¶
It is clear to the author(s) that these are distinct technologies, and that from the Internet's point of view, the Web is but one of many application protocols.¶
At the same time, in practice the Web is used almost ubiquitously from the point of view of Internet users. This raises the question of why this has come to be.¶
The arguments for or against this state of things are too numerous to list here. To provide a simpler lens, consider the following statement: "The Internet connects machines. The Web connects people."¶
In fact, no parts of the Internet protocols are particularly concerned with people. Addressing happens on a per-machine basis -- ignoring for the moment the ability to address more abstract things such as multicast groups or more concrete things such as the link on the machine that is configured to respond to a particular address.¶
Web technology, on the other hand, is concerned with what it terms "resources", a malleable concept that can represent digital or physical "things" as well as oneself or other people's digital identities. Furthermore, through user based authentication methods, the Web firmly establishes itself as being concerned with bridging between the purely digital and the physical or hybrid worlds.¶
If this statement is true, then the prevalence of Web-based applications may simply be explained by the fact that humans are trying to solve human problems with technology, and this often involves having a notion of how a human may be represented in the digital realm.¶
Arguably, then, from this human perspective the distinction between the Internet and the Web is moot, even if the engineering perspective differs. This "end-user perspective" demands that future Internet evolution be free to adopt concepts from the Web, such as those relating to e.g. resources and user identification.¶
Doing so may not only open avenues for evolution that the current stricter split of concerns keeps firmly closed. It also permits the Internet, rather than one of its applications, to become the substrate for transporting people's actual, real world concerns.¶
On the other hand, as Section 2.1.8 on intermittency highlights, this in no way implies that all characteristics of the Web's design should be adopted as they are in future Internet evolutions. In fact, the strong suggestion here is that neither the Internet nor Web can be the complete answer to current and foreseeable use cases for the Internet.¶
2.2.2. Generative Systems vs. Tethered Appliances
In "The Future of the Internet" [FUTURE-INTERNET], Zittrain describes what he calls "tethered appliances" and "generative systems". In this definition, a tethered appliance, like a kitchen appliance, fulfills a strictly limited set of functions, determined by the manufacturer. It may be "tethered" in the same way that telephone sets used to be distributed by Bell/AT&T, intrinsically linked to the purchase of a phone line.¶
Zittrain contrasts this to "generative systems" such as the Personal Computer (PC). Here, a semi-finished product with no particular purpose other than to provide compute resources was brought to market -- and flourished, and in so doing changed the world.¶
He argues that the Internet is such a generative system. When it came to be, few could envision the impact it would have on the world today. He identifies as the reason that, having no specific purpose, the Internet was open to be used for whichever purpose its users desired.¶
Generative systems not only offer incredible flexibility and allow for solutions to age-old problems; human ingenuity will also find ways to monetize such solutions. Zittrain observes that the businesses that profit most from the generative nature of a system eventually reach a point where innovation no longer matters; instead, the logic of capitalism dictates that efforts now need to be expended to shut out competition. In effect, it is in these same businesses' interest to turn the erstwhile generative system into an appliance tethered to their business model.¶
This view is picked up and largely confirmed in the later "Internet for the People" by Tarnoff. Where the earlier book predicts the direction the Internet may evolve in, the later one confirms its evolution.¶
Both authors agree, each in their own terms, that the way to "save" the Internet is to re-focus attention on what made it generative in the first place: it served as a substrate for nothing specific, and so for everything and anything.¶
Where the Internet is perhaps more focused on the problems relating to establishing machine-to-machine communications, the Web has shown us that this "anything and everything" refers to human concerns.¶
2.3. Summary
The Internet and Web both started out as generative; this has led to their respective dominance at the layer each occupies.¶
But generativity also carries a two-fold danger:¶
-
Any generative system can and will be abused to create or exacerbate societal issues as explored throughout this section. This lies in the nature of systems that allow "bad" uses as well as "good" ones.¶
-
Market forces, political motivations, and misapplied good intentions alike will lead towards turning erstwhile generative systems into tethered appliances, often for the "protection" of some part of their user base.¶
At the same time, locking down generative systems -- for any motivation -- is strongly reminiscent of the categories in which the abuses outlined in this section fall. Here, three major categories can easily be identified:¶
-
Centralization, either by itself or as an amplifier of the following points.¶
-
Unwarranted access to personally identifiable information (PII).¶
-
Denial of access to data in general (which may include PII).¶
The latter category explicitly locks down a generative system by reducing its genericity to those uses that are permitted. The second category establishes the data set required to make any kind of decision on which data flows to prohibit. Finally, centralization -- as shown in prior parts of this section -- is a key contributor to enabling either.¶
This section also briefly explored why meeting these abuses with legislation can only be part of the answer. One of the reasons is that legislation is slow compared to technological advancement -- but far more sinister is the fact that legislation can itself work against people's best interests.¶
A clear goal of any future architecture aiming to re-invigorate the generativity of the future Internet must then be to work against these three drivers of abuse -- while at the same time avoiding measures that would lock down the network in this pursuit.¶
We'll explore the properties of the Web and other architectures that contribute to these issues in more detail in [INTERPEER-REQUIREMENTS]. Before that, Section 3 focuses on how the Web and Internet are currently used; these uses need to be preserved in any alternate proposal.¶
3. Use Cases
This section presents use cases to consider, or rather use case classes. While actual use cases are a powerful motivator, we limit ourselves to one or few as a proxy for an entire class.¶
In Section 2.1, we saw that while the human problems with the Internet occur at several layers, the Web is the layer at which human-to-human interaction is mostly modelled -- a new system must therefore necessarily capture use cases of the Web in order to be a viable alternative; this is reflected in Section 3.1.¶
Additional considerations, which the Web does not fulfil well at this time, are contained in Section 3.2.¶
3.1. Web Use Cases
The Web started out with a fairly simple idea -- but its generative nature then quickly prompted new uses other than those originally envisioned. We identify three relatively distinct classes of use cases for the current, modern Web.¶
3.1.1. Document Web
The original use of the Web was dissemination of knowledge in the form of publication of documents. This is effectively what the PUT, GET and DELETE methods of HTTP ([RFC7231], Section 4.3) embody: a means to store, retrieve and delete documents from a Web server.¶
The scope of these operations is an entire resource. HTTP permits optional Range requests [RFC7233] to target sub-resources, but here an interesting dynamic prevents its use across a wide variety of use cases.¶
On the one hand, the [REST] architectural style that HTTP implements requires that data transferred be "representational". The idea is that it is up to the service implementor how to persist data, which representations to send, and which to accept. However, it is explicitly not implied that the byte sequences that the service sends and receives are identical to the byte sequences it persists.¶
On the other hand, the Range header operates on byte ranges. Mapping byte ranges of a data representation that differs from the data storage format onto each other is no easy task; it should therefore not be surprising that Range headers are most often used when the representation matches the storage format, i.e. when the methods operate on what's commonly described as Binary Large Objects (BLOBs).¶
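To illustrate why Range requests pair naturally with BLOBs, consider this simplified Python sketch of serving a single-range request directly from stored bytes. It works precisely because representation and storage format coincide; the function is illustrative only and handles just the simplest form of the header.¶

```python
def serve_range(blob: bytes, range_header: str) -> tuple[int, bytes]:
    """Serve a single 'bytes=start-end' range from a stored BLOB.

    Returns (status, body). Real servers must also cope with multiple
    ranges, suffix ranges ('bytes=-500'), and open-ended ranges
    ('bytes=500-').
    """
    unit, _, spec = range_header.partition("=")
    if unit != "bytes" or "-" not in spec:
        return 400, b""  # Bad Request
    start_s, _, end_s = spec.partition("-")
    try:
        start, end = int(start_s), int(end_s)
    except ValueError:
        return 400, b""
    if start >= len(blob) or end < start:
        return 416, b""  # Range Not Satisfiable
    return 206, blob[start:end + 1]  # Partial Content; end is inclusive

blob = b"0123456789"
print(serve_range(blob, "bytes=2-5"))  # → (206, b'2345')
```

When the served representation is instead generated from a different storage format, no such direct byte-slice mapping exists, which is the dynamic described above.¶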
In summary, the Document Web use case class is best described as one offering simple operations for manipulating entire resources (or their representations). This is likely why the "RESTful" design style (not to be confused with [REST] itself) is characterized by mapping the PUT, GET, POST and DELETE methods directly onto Create, Read, Update and Delete (CRUD) operations. Even though HTTP offers other uses with such extensions as the Range header, it is an uncomplicated mapping, and therefore easily and widely adopted.¶
3.1.2. API Web
The next use case class treats the Web as a remote procedure call (RPC) protocol, in which resources or resource collections are mostly referred to as application programming interfaces (APIs) today.¶
The API Web is not fundamentally distinct from the Document Web in the HTTP methods it employs. But API endpoints (resources) no longer represent a document or document type, but a functionality the client wishes to invoke. As such, the data representation format chosen tends to reflect the needs of APIs, where structured data is transmitted, which may refer to multiple other resources ("connect 'foo' to 'bars' A, B and C"). More rigorous approaches in the API Web identify abstract objects via their resource locators (URLs).¶
APIs, as the name suggests, also provide interfaces between distinct engineering teams. As such, common standards have been created and discarded over the existence of the Web, such as [SOAP] and the currently popular [OPENAPI]. These provide interoperability by layering a protocol for specifying RPC invocations onto HTTP.¶
In reality, services will usually provide a mixture of API and Document Web functions. The main conceptual distinction between the two use case classes lies in the expectations around the meaning and freshness of a response.¶
Documents are by nature fixed and self-contained. They can be revised, and they can refer to other documents needed to understand them. But each revision is effectively a new document, albeit one sharing a history with its predecessors.¶
An API response, on the other hand, is ephemeral. It describes the result of an operation. The same operation performed at another point in time may yield a different result.¶
In HTTP, this difference is expressed in caching. The standard provides many headers relating to the longevity or freshness of a response. In Document Web use cases, it can typically be assumed that a response has a fairly long validity -- while in API use cases, this is not a valid assumption.¶
API Web then differs from Document Web in that the resources one accesses represent functions rather than documents, and the responses it produces are ephemeral and contextual rather than self-contained and long-lived.¶
3.1.3. Real-time Web/Streaming
Beyond the Document and API Web, there exists also a class of use cases related to data streaming.¶
Streaming is itself a term with a somewhat ambiguous meaning. For the purposes of this document, it is interpreted as one party in a network transaction consuming some related data (such as a resource) before the other party has finished producing it.¶
Consuming and producing can be understood as receiving and sending data over the network, but may also include processing on either side to create or display the resource in some fashion. That is to say, in this view the sending party does not need to generate data, but may simply not have finished sending it -- live data generation, however, is also included in this definition.¶
In principle, this can be mapped onto HTTP in arbitrary ways. The Range header is well suited to a subset of such uses, but it is equally possible to structure the resource into individual documents that are requested in sequence. The latter is, for example, how "HTTP Live Streaming" [RFC8216] segments a streaming resource into individual media segments. Finally, repeated calls to the same or different API endpoints may produce the data incrementally.¶
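Both mappings can be illustrated in a short sketch; the function names and segment sizes are illustrative, not taken from any specification:

```python
def segment(resource: bytes, segment_size: int) -> list[bytes]:
    """Split a resource into fixed-size segments, in the spirit of HTTP
    Live Streaming [RFC8216], so a consumer can fetch and render segment
    N while segment N+1 is still being produced or transferred."""
    return [resource[i:i + segment_size]
            for i in range(0, len(resource), segment_size)]

def range_header(index: int, segment_size: int) -> str:
    """The equivalent fetch of one segment, expressed as an HTTP Range
    request instead of separate segment documents."""
    start = index * segment_size
    return f"bytes={start}-{start + segment_size - 1}"

stream = segment(b"0123456789", 4)
assert stream == [b"0123", b"4567", b"89"]
assert range_header(1, 4) == "bytes=4-7"
```

Either way, the consumer obtains the resource piecewise, which is what enables consumption to begin before production has finished.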
The precise mapping onto HTTP mechanics barely matters. The main point of this use case class is that there is a real-time component to it in a processing pipeline. Producers of data can produce data only at a certain pace. Transmission is bounded by the available throughput rate of the network path. Consumers may also only render data at a given pace.¶
What distinguishes the Real-time Web use case class from the previous two is that it includes attempts at matching the network throughput rate and latency to the capabilities and expectations of the consumer, the producer, or both. This is distinct enough from the other use cases to warrant its own use case class.¶
3.2. Additional Use Cases
There exist a number of use cases for which the Web is not typically adopted, or where adoption poses additional challenges that are not intrinsically resolved within its architecture or implementation. This section explores these in brief.¶
3.2.1. Resilience
Preceding the birth of the Internet, Paul Baran described different communication architectures in [RM3420], which he terms "centralized", "decentralized" and "distributed". He concludes that the "distributed" model offers the highest resilience against communications failures, a finding that led to the packet switching paradigm of the Internet.¶
In the "distributed" model, communications nodes have connectivity with multiple other nodes. Data packets sent between any two nodes can traverse many intermediary nodes; when one such node fails, a different path can be taken.¶
By and large, the Internet remains distributed in nature. A number of incentives may push towards more centralization, but this is less to do with the Internet's architecture than the interests of Internet service providers.¶
The Web, for similar reasons, shows much stronger trends towards centralization. Works such as [RFC9518] assess the specific reasons, as well as what standards can do about them in far more detail than this document can.¶
Centralization introduces real-world risks, as explored in Section 2, some of which directly relate to notions of resilience. One such example would be that centralization often weakens resilience against censorship. If the Web shows tendencies towards centralization, it follows that heightened resilience is a use case that is not typically captured by Web technology -- and yet may help mitigate issues raised in the first part of this document.¶
3.2.2. Remote Locations
There is no particular argument for physical location to factor into Internet or Web usage -- but underlying protocols that facilitate connectivity may suffer in some geographical locations. This has follow-on effects for performance and user experience of the Web stack as a whole, which may negatively affect its suitability for a particular use case.¶
3.2.2.1. (Commercial) Drones
Vehicles (drones) operating in Unmanned Aircraft Systems (UAS) commonly fall into several categories; the European Union Aviation Safety Agency (EASA) has standardized these categories within Europe. On one end of the spectrum lie small drones operated via remote control. On the opposite end lie large drones with high payload capacity, typically military in nature. Autonomous vehicles carrying humans are treated in yet another fashion.¶
Projections predict commercial innovation to occur predominantly in between these extremes; under EASA rules, this would be termed the "specific" category.¶
In this category, drones are characterized by several factors. On the one hand, they must be large enough to have a useful carrying capacity, which renders them heavy enough to be dangerous when they fail. On the other hand, they must typically operate Beyond Visual Line of Sight (BVLOS), or else their usefulness is questionable compared to sending a person. Finally, they must be reasonably cost effective to purchase and operate, which strongly suggests that Commercial Off-The-Shelf (COTS) components should be used in their manufacture.¶
This poses a conundrum for maintaining Command, Control and Communications (C3) links to the vehicles. Suitable technology, such as that found in mobile devices, does not meet the safety requirements mandated for such links by regulators. To mitigate this, failover between such link technologies appears to be the most likely solution [DRONECOMMS].¶
Connectivity of an individual link can fail for a variety of reasons, such as through spectrum interference or physical obstacles -- or simply due to distance. In remote locations, for example, it is reasonable to assume that 802.11 connectivity is not a given, while satellite based systems may be available.¶
3.2.2.2. Internet-of-Things (IoT)
For accessing Things on the Internet, the Constrained Application Protocol [RFC7252] is modelled after [REST]. This follows from a combination of two assumptions: first, that communication with the Thing itself is subject to constraints that do not occur on the rest of the Internet; second, that REST is the default method for accessing resources.¶
As such, it is not surprising that most IoT architectures envision the Things to be accessed via a gateway node that translates from e.g. HTTP to e.g. CoAP and back. A common scenario is that a number of constrained sensor devices communicate over a limited range to some gateway via a protocol such as CoAP. The gateway then is either permanently or intermittently connected to the cloud, where other machines can query it for (aggregated) sensor data.¶
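The gateway pattern described above can be sketched as follows; the class and method names are hypothetical and stand in for real HTTP and CoAP endpoints:

```python
# Illustrative IoT gateway: constrained sensors report over a constrained
# protocol (standing in for CoAP), the gateway aggregates readings, and
# cloud-side clients query the aggregate (standing in for HTTP).
class Gateway:
    def __init__(self) -> None:
        self.readings: dict[str, list[float]] = {}

    def coap_post(self, sensor_id: str, value: float) -> None:
        """Constrained side: a sensor reports a reading."""
        self.readings.setdefault(sensor_id, []).append(value)

    def http_get_average(self, sensor_id: str) -> float:
        """Cloud side: a client queries aggregated sensor data."""
        values = self.readings.get(sensor_id, [])
        return sum(values) / len(values) if values else float("nan")

gw = Gateway()
gw.coap_post("temp-1", 20.0)
gw.coap_post("temp-1", 22.0)
assert gw.http_get_average("temp-1") == 21.0
```

The sketch highlights the translation role: neither side speaks the other's protocol, and the gateway is the single point where the two meet.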
3.2.2.3. Space Communications
Deep space, as the ultimate remote location, has prompted the development of [RFC4838], the Delay-/Disruption-Tolerant Network (DTN) Architecture and related implementations.¶
Space communications take a fundamentally different approach to latency and intermittency than most earthbound solutions. Whereas in near space, such as Low Earth Orbit (LEO), DTN can be used merely to encapsulate regular IP-based traffic to its destination, the same approach may not work when latency exceeds the expectations of the application. In other words, different approaches to designing applications are needed, which in turn put into question the use of Internet technology altogether.¶
3.2.2.4. Robotics
Robots have much in common with UAVs, whether terrestrial or in aerospace, and can in some cases also be compared to mobile IoT devices. There is, however, good reason to list them as a separate use case.¶
The first reason is merely to acknowledge that robotics applications are on the rise; therefore, any application that is best classified as general robotics needs more attention.¶
More concretely, however, robotics presents an interesting design challenge, which is met by a specific design approach. In practice this does not differ significantly from the design of other modern vehicles, but robotics as a field represents a more modern and generalized take on the same kind of design.¶
Specifically, robots are typically composed of a relatively large number of low-powered compute devices, which themselves are connected to actuators and sensors of various kinds. While the automotive industry produces vehicles in a similar fashion, it employs different standards such as automotive ethernet (part of [IEEE-802.3]) and the older [CAN], etc.¶
In robotics, the de-facto standard today is [ROS2], the "Robot Operating System". ROS2 itself builds upon [DDS] for networking. An interesting observation here is that ROS2 leverages DDS as a generic message bus, while it is in the purview of the DDS implementation whether these messages are delivered locally or globally via the Internet.¶
In particular, ROS2 builds heavily on a publish-subscribe paradigm which in terms of Internet technology is best represented by IP multicast. In fact, DDS implementations would likely resort to multicast for networked communications.¶
However, deploying IP multicast solutions at global scale remains an ongoing challenge ([IP-MULTICAST-CHALLENGES]). Similarly, automotive ethernet is a localized solution with no capability for being routed over the Internet (and does not provide native multicast solutions anyway).¶
In practice, robotics systems designed along this paradigm will often work in a specific manner: nodes attached to sensors may regularly publish sensor data, while processing nodes subscribe to such sensor data. Meanwhile, actuator nodes might subscribe to commands, which controller nodes may send. Finally, service nodes serve higher-level and longer-term goals (such as navigation to a particular location), and combine information from processing nodes to instruct controllers.¶
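The topic-based publish-subscribe pattern underlying this design can be sketched with a minimal in-process message bus. This is a hypothetical illustration, not the ROS2 or DDS API; topic names are made up:

```python
from collections import defaultdict
from typing import Any, Callable

# Minimal in-process message bus illustrating topic-based
# publish-subscribe, as used (in far more elaborate form) by ROS2/DDS.
class Bus:
    def __init__(self) -> None:
        self.subscribers: dict[str, list[Callable[[Any], None]]] = defaultdict(list)

    def subscribe(self, topic: str, callback: Callable[[Any], None]) -> None:
        self.subscribers[topic].append(callback)

    def publish(self, topic: str, message: Any) -> None:
        # Deliver to every subscriber; publishers need not know consumers.
        for callback in self.subscribers[topic]:
            callback(message)

bus = Bus()
received: list = []
# A processing node subscribes to sensor data...
bus.subscribe("lidar/scan", received.append)
# ...and a sensor node publishes without knowing its consumers.
bus.publish("lidar/scan", {"ranges": [1.2, 1.3]})
assert received == [{"ranges": [1.2, 1.3]}]
```

The key property is the decoupling: whether delivery stays on the local bus or crosses the network is a concern of the transport layer, not of the publishing node.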
These are likely to be very localized operations. However, global information, such as traffic or weather observations might provide, should be available to such systems in a "native" fashion. Meanwhile, remote pilot assisted systems (compare Section 3.2.2.1) rely on sensor inputs being forwarded to the remote pilot.¶
In summary, robotics presents a challenge where operation crosses into the territory of autonomous vehicles: the design paradigms along which robotics systems are composed are not well served by current Internet technology, because its standardized stack falls short of providing globally routable publish-subscribe mechanisms.¶
3.2.2.5. Generalization
The generalization of the issues above, as exemplified by the drone example, is well expressed in DTN: to communicate with "remote" places, communications needs to tolerate high delay as well as high intermittency. The Web does not tolerate either well.¶
As an additional challenge, automation relies on publish-subscribe mechanisms which are not as well supported as point-to-point operations at this time.¶
3.2.3. Energy Usage
Of growing importance is the energy usage of any networked environment. Using renewable energy for e.g. hosting [GREEN-HOSTING] is commendable, but one cannot require green energy usage in a networking protocol; no practical means of enforcement of such rules exists. However, at the protocol design level, it is possible to consider how much energy may be used.¶
This approach, dubbed "green coding", has gained attention in recent years ([GREEN-CODING-1], [GREEN-CODING-2], [GREEN-CODING-3], etc.). In the realm of networking protocols, the design approach reduces to a relatively simple method: reduce transmissions.¶
In practice, things are not quite that simple: measurements are needed to determine energy usage in a variety of scenarios. But each packet transmitted induces energy usage not only for the transmission itself, but also for the processors making packet switching decisions. It is clear that reducing the packet transmission rate by design will reduce energy usage.¶
One approach to this is to treat caching of remote data as a first class problem, such that transfer can be avoided as much as possible.¶
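Such first-class caching can be sketched with a validator-based client: it re-fetches a remote resource only when its validator (here an ETag-like tag) has changed, so unchanged data causes no payload transfer. All names here are illustrative, and the origin is a stand-in for a real server:

```python
# Illustrative cache-first client: conditional fetches avoid transfer
# of unchanged payloads (in HTTP terms: If-None-Match and 304 responses).
class CachingClient:
    def __init__(self, origin) -> None:
        self.origin = origin  # callable: (url, etag) -> (status, tag, body)
        self.cache: dict[str, tuple[str, bytes]] = {}

    def get(self, url: str) -> bytes:
        tag, body = self.cache.get(url, (None, b""))
        status, new_tag, new_body = self.origin(url, tag)
        if status == 304:          # not modified: no payload transferred
            return body
        self.cache[url] = (new_tag, new_body)
        return new_body

transfers = []
def origin(url, etag):
    """Fake origin server: answers 304 when the client's tag is current."""
    if etag == "v1":
        return 304, "v1", b""
    transfers.append(url)
    return 200, "v1", b"payload"

client = CachingClient(origin)
assert client.get("/doc") == b"payload"
assert client.get("/doc") == b"payload"  # served via 304, no re-transfer
assert transfers == ["/doc"]
```

Every avoided transfer saves energy at the endpoints and at every switching hop in between, which is precisely the design lever the text identifies.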
3.2.4. Data Protection
Increasingly, digital rights are seen as variants of fundamental human rights; in the European Union, the European Parliament, the Council and the Commission jointly signed the European Declaration on Digital Rights and Principles, which some researchers consider "transformative" [DIG-RIGHTS].¶
This relatively local instrument reflects a larger trend of tending to the digital rights of citizens worldwide, which inevitably influences data protection practices.¶
3.2.4.1. PII and GDPR
In the European Union, the General Data Protection Regulation [GDPR] has set the standard for data protection laws worldwide, with several jurisdictions adopting comparable frameworks. It focuses on protecting Personally Identifiable Information (PII) and requires, amongst other things, that such data may only be collected with "informed consent".¶
A future architecture must be structured such that data sharing occurs with informed consent, or else risks running afoul of such requirements.¶
3.2.4.2. Whistleblower Protection
Similar to the GDPR, some jurisdictions have whistleblower protection laws, such as the German [HinSchG].¶
Using this example, the law requires confidentiality of whistleblower identities, and additionally requires that reports be handled anonymously -- the distinction implies that the report must be devoid of references to a reporter's real-world identity. Although such an identity may be managed confidentially in a separate data store, it may not be linked to the report.¶
3.2.4.3. Journalism & Source Protection
Aside from the protection of whistleblowers, the protection of journalists' sources is generally considered a fundamental component of press freedom. In Europe, courts have regularly held that revealing sources would constitute a violation of Article 10 of the European Convention on Human Rights [ECHR], which is concerned with freedom of expression.¶
From a technical point of view, source protection and whistleblower protection have strong similarities. But it is worth highlighting that the legal frameworks they refer to can differ, and so impose different requirements.¶
4. Summary
By way of examining current classes of human rights issues that are exacerbated or caused by Internet/Web connectivity, this document identifies three main drivers behind such issues in Section 2.3. These are:¶
-
Centralization¶
-
Unwarranted access to personally identifiable information¶
-
Denial of access to data¶
The section concludes that a human-centric networking approach must work against these drivers.¶
Section 3 by contrast explores how the current Internet/Web is used, but also some of the use case classes for which the current Internet/Web stack provides no solutions. These are often, but not always, closely related to the issues raised in the problem statement section.¶
The Web use cases can be broadly summarized in three classes:¶
-
The Document Web¶
-
The API Web¶
-
The Real-time Web/Streaming¶
Classes of uses that this stack fails to address adequately are:¶
-
Resilience, in the face of accidental failures or malicious manipulation.¶
-
Intermittency and latency, i.e. use in remote locations¶
-
Energy usage¶
-
Data protection concerns.¶
The [INTERPEER-REQUIREMENTS] document provides an analysis framework for networked architectures based on these findings. It then applies this framework to a variety of existing architectures, identifying any gaps that may exist. Finally, it formulates requirements for a human-centric architecture such as the one put forward in [INTERPEER-ARCHITECTURE].¶
5. References
5.1. Normative References
- [INTERPEER-ARCHITECTURE]
- Finkhaeuser, J., "Architecture for Human-Centric Networking", PIE PIE.f92f09.architecture-00, , <https://specs.interpeer.org/PIE.f92f09.architecture/PIE.f92f09.architecture-00>.
- [INTERPEER-PROBLEM-STATEMENT]
- Finkhaeuser, J., "Problem Statement & Gap Analysis for Human-Centric Networking", PIE PIE.f92f09.problem-statement-00, , <https://specs.interpeer.org/PIE.f92f09.problem-statement/PIE.f92f09.problem-statement-00>.
- [INTERPEER-REQUIREMENTS]
- Finkhaeuser, J., "Gap Analysis & Requirements for Human-Centric Networking", PIE PIE.f92f09.gap-analysis-00, , <https://specs.interpeer.org/PIE.f92f09.gap-analysis/PIE.f92f09.gap-analysis-00>.
- [PIE.f92f09.00]
- Finkhaeuser, J., "PIEs - Proposals for Interpeer Enhancement", PIE PIE.f92f09.00-00, , <https://specs.interpeer.org/PIE.f92f09.00/PIE.f92f09.00-00>.
5.2. Informative References
- [AOSC]
- Zuboff, S., "The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power", ISBN 9781781256855, .
- [AUTOMATED-APARTHEID]
- Amnesty International, "Automated Apartheid", , <https://www.amnesty.org/en/documents/mde15/6701/2023/en/>.
- [CAMBRIDGE-ANALYTICA]
- Hu, M., "Cambridge Analytica’s black box", SAGE Publications, Big Data & Society vol. 7, no. 2, DOI 10.1177/2053951720938091, , <https://doi.org/10.1177/2053951720938091>.
- [CAN]
- ISO/TC 22/SC 31, "Road vehicles - Controller Area Network (CAN)", ISO 11898-1:2024, , <https://www.iso.org/standard/86384.html>.
- [CENSORED-PLANET]
- Sundara Raman, R., Shenoy, P., Kohls, K., and R. Ensafi, "Censored Planet: An Internet-wide, Longitudinal Censorship Observatory", ACM, Proceedings of the 2020 ACM SIGSAC Conference on Computer and Communications Security pp. 49-66, DOI 10.1145/3372297.3417883, , <https://doi.org/10.1145/3372297.3417883>.
- [COGNITIVE-INERTIA]
- McGuire, W., "Cognitive consistency and attitude change.", American Psychological Association (APA), The Journal of Abnormal and Social Psychology vol. 60, no. 3, pp. 345-353, DOI 10.1037/h0048563, , <https://doi.org/10.1037/h0048563>.
- [DDS]
- Object Management Group, "DDS™ - Data Distribution Service", , <https://www.omg.org/spec/DDS/>.
- [DECISION-FATIGUE]
- Pignatiello, G., Martin, R., and R. Hickman, "Decision fatigue: A conceptual analysis", SAGE Publications, Journal of Health Psychology vol. 25, no. 1, pp. 123-135, DOI 10.1177/1359105318763510, , <https://doi.org/10.1177/1359105318763510>.
- [DIG-RIGHTS]
- Cocito, C. and P. de Hert, "The Transformative Nature of the EU Declaration on Digital Rights and Principles: Replacing the Old Paradigm (Normative Equivalency of Rights)", Elsevier BV, DOI 10.2139/ssrn.4341816, , <https://doi.org/10.2139/ssrn.4341816>.
- [DOI]
- "Information and documentation -- Digital object identifier system", BSI British Standards, DOI 10.3403/30177056u, , <https://doi.org/10.3403/30177056u>.
- [DRONECOMMS]
- Finkhäuser, J. and M. Larsen, "Reliable Command, Control and Communication Links for Unmanned Aircraft Systems: Towards compliance of commercial drones", ACM, Proceedings of the 2021 Drone Systems Engineering and Rapid Simulation and Performance Evaluation: Methods and Tools Proceedings pp. 22-28, DOI 10.1145/3444950.3444954, , <https://doi.org/10.1145/3444950.3444954>.
- [ECHR]
- European Court of Human Rights; Council of Europe, "European Convention on Human Rights", , <https://www.echr.coe.int/documents/d/echr/convention_ENG>.
- [FUTURE-INTERNET]
- Zittrain, J., "The Future of the Internet -- And How To Stop It", ISBN 9780300151244, , <https://dash.harvard.edu/handle/1/4455262>.
- [GAZA-GENOCIDE]
- Amnesty International, "Israel/Occupied Palestinian Territory: 'You feel like you are subhuman': Israel's Genocide Against Palestinians in Gaza", , <https://www.amnesty.org/en/documents/mde15/8668/2024/en/>.
- [GAZA-GENOCIDE-ICJ]
- International Court of Justice, "Application of the Convention on the Prevention and Punishment of the Crime of Genocide in the Gaza Strip (South Africa v. Israel)", n.d., <https://www.icj-cij.org/case/192>.
- [GDPR]
- Council of the European Union, "General Data Protection Regulation (GDPR)", EU Regulation 2016/679, , <https://eur-lex.europa.eu/legal-content/EN/ALL/?uri=CELEX:32016R0679&qid=1661512309950>.
- [GREEN-CODING-1]
- Rua, R., Fraga, T., Couto, M., and J. Saraiva, "Greenspecting Android virtual keyboards", ACM, Proceedings of the IEEE/ACM 7th International Conference on Mobile Software Engineering and Systems pp. 98-108, DOI 10.1145/3387905.3388600, , <https://doi.org/10.1145/3387905.3388600>.
- [GREEN-CODING-2]
- Pereira, R., Matalonga, H., Couto, M., Castor, F., Cabral, B., Carvalho, P., de Sousa, S., and J. Fernandes, "GreenHub: a large-scale collaborative dataset to battery consumption analysis of android devices", Springer Science and Business Media LLC, Empirical Software Engineering vol. 26, no. 3, DOI 10.1007/s10664-020-09925-5, , <https://doi.org/10.1007/s10664-020-09925-5>.
- [GREEN-CODING-3]
- Ribeiro, F., Abreu, R., and J. Saraiva, "On Understanding Contextual Changes of Failures", IEEE, 2021 IEEE 21st International Conference on Software Quality, Reliability and Security (QRS) pp. 1036-1047, DOI 10.1109/qrs54544.2021.00112, , <https://doi.org/10.1109/qrs54544.2021.00112>.
- [GREEN-HOSTING]
- Karyotakis, M. and N. Antonopoulos, "Web Communication: A Content Analysis of Green Hosting Companies", MDPI AG, Sustainability vol. 13, no. 2, pp. 495, DOI 10.3390/su13020495, , <https://doi.org/10.3390/su13020495>.
- [HinSchG]
- "Hinweisgeberschutzgesetz vom 31. Mai 2023 (BGBl. 2023 I Nr. 140)", DE Hinweisgeberschutzgesetz, HinSchG, , <https://www.gesetze-im-internet.de/hinschg/>.
- [HYBRID-WARFARE]
- Dov Bachmann, S., Putter, D., and G. Duczynski, "Hybrid warfare and disinformation: A Ukraine war perspective", Wiley, Global Policy vol. 14, no. 5, pp. 858-869, DOI 10.1111/1758-5899.13257, , <https://doi.org/10.1111/1758-5899.13257>.
- [IEEE-802.3]
- "IEEE Standard for Ethernet", IEEE, DOI 10.1109/ieeestd.2022.9844436, ISBN 9781504487252, , <https://doi.org/10.1109/ieeestd.2022.9844436>.
- [IFTP]
- Tarnoff, B., "Internet for the People", ISBN 9781839762024, .
- [IP-MULTICAST-CHALLENGES]
- Farinacci, D., Giuliano, L., McBride, M., and N. Warnke, "Multicast Lessons Learned from Decades of Deployment Experience", Work in Progress, Internet-Draft, draft-ietf-pim-multicast-lessons-learned-06, , <https://datatracker.ietf.org/doc/html/draft-ietf-pim-multicast-lessons-learned-06>.
- [ISOC-FOUNDATION]
- Internet Society Foundation, "Internet Society Foundation", n.d., <https://www.isocfoundation.org/>.
- [MEDIA-INDEPENDENCE]
- Steele, J., "What Can We Learn From the Short History of Independent Media in Serbia? Radio B92, George Soros, and New Models of Media Development", SAGE Publications, The International Journal of Press/Politics vol. 29, no. 3, pp. 646-666, DOI 10.1177/19401612231170092, , <https://doi.org/10.1177/19401612231170092>.
- [ML-DISINFORMATION]
- Katyal, S., "Artificial Intelligence, Advertising, and Disinformation", Project MUSE, Advertising & Society Quarterly vol. 20, no. 4, DOI 10.1353/asr.2019.0026, , <https://doi.org/10.1353/asr.2019.0026>.
- [ML-VOICE]
- Galyashina, E. and V. Nikishin, "AI Generated Fake Audio as a New Threat to Information Security: Legal and Forensic Aspects", SCITEPRESS - Science and Technology Publications, Proceedings of the International Scientific and Practical Conference on Computer and Information Security pp. 17-21, DOI 10.5220/0010616700003170, , <https://doi.org/10.5220/0010616700003170>.
- [NGI-Assure]
- PNO Digital Srl, "NGI Assure", DOI 10.3030/957073, Grant Agreement ID 957073, , <https://doi.org/10.3030/957073>.
- [NGI0-Discovery]
- Stichting NLNet, "NGI Zero Discovery", DOI 10.3030/825322, Grant Agreement ID 825322, , <https://doi.org/10.3030/825322>.
- [NIST.IR.8366]
- Miller, K., Alderman, D., Carnahan, L., Chen, L., Foti, J., Goldstein, B., Hogan, M., Marshall, J., Reczek, K. K, Rioux, N., Theofanos, M. F, Wollman, D., and National Institute of Standards and Technology (U.S.), "Guidance for NIST staff on using inclusive language in documentary standards", NIST IR (Interagency/Internal Reports) 8366, DOI 10.6028/NIST.IR.8366, , <https://specs.interpeer.org/archive/NIST.IR.8366.pdf>. Archived copy; this document has been withdrawn by NIST due to Executive Order (E.O.) 14151 of the President of the United States of America.
- [OAUTH]
- Hardt, D., Ed., "The OAuth 2.0 Authorization Framework", RFC 6749, DOI 10.17487/RFC6749, , <https://www.rfc-editor.org/rfc/rfc6749>.
- [OPENAPI]
- Miller, D., Ed., Whitlock, J., Ed., Gardiner, M., Ed., Ralphson, M., Ed., Ratovsky, R., Ed., and U. Sarid, Ed., "OpenAPI Specification v3.1.0", , <https://spec.openapis.org/oas/v3.1.0>.
- [REST]
- Fielding, R. T., "Architectural Styles and the Design of Network-based Software Architectures", doctoral dissertation, , <http://www.ics.uci.edu/~fielding/pubs/dissertation/top.htm>.
- [RFC4838]
- Cerf, V., Burleigh, S., Hooke, A., Torgerson, L., Durst, R., Scott, K., Fall, K., and H. Weiss, "Delay-Tolerant Networking Architecture", RFC 4838, DOI 10.17487/RFC4838, , <https://www.rfc-editor.org/rfc/rfc4838>.
- [RFC7231]
- Fielding, R., Ed. and J. Reschke, Ed., "Hypertext Transfer Protocol (HTTP/1.1): Semantics and Content", RFC 7231, DOI 10.17487/RFC7231, , <https://www.rfc-editor.org/rfc/rfc7231>.
- [RFC7233]
- Fielding, R., Ed., Lafon, Y., Ed., and J. Reschke, Ed., "Hypertext Transfer Protocol (HTTP/1.1): Range Requests", RFC 7233, DOI 10.17487/RFC7233, , <https://www.rfc-editor.org/rfc/rfc7233>.
- [RFC7252]
- Shelby, Z., Hartke, K., and C. Bormann, "The Constrained Application Protocol (CoAP)", RFC 7252, DOI 10.17487/RFC7252, , <https://www.rfc-editor.org/rfc/rfc7252>.
- [RFC8216]
- Pantos, R., Ed. and W. May, "HTTP Live Streaming", RFC 8216, DOI 10.17487/RFC8216, , <https://www.rfc-editor.org/rfc/rfc8216>.
- [RFC9330]
- Briscoe, B., Ed., De Schepper, K., Bagnulo, M., and G. White, "Low Latency, Low Loss, and Scalable Throughput (L4S) Internet Service: Architecture", RFC 9330, DOI 10.17487/RFC9330, , <https://www.rfc-editor.org/rfc/rfc9330>.
- [RFC9518]
- Nottingham, M., "Centralization, Decentralization, and Internet Standards", RFC 9518, DOI 10.17487/RFC9518, , <https://www.rfc-editor.org/rfc/rfc9518>.
- [RFC9620]
- Grover, G. and N. ten Oever, "Guidelines for Human Rights Protocol and Architecture Considerations", RFC 9620, DOI 10.17487/RFC9620, , <https://www.rfc-editor.org/rfc/rfc9620>.
- [RM3420]
- Baran, P., "On Distributed Communications: I. Introduction to Distributed Communications Networks", RAND Corporation, DOI 10.7249/rm3420, , <https://doi.org/10.7249/rm3420>.
- [ROS2]
- The ROS Community, "Robot Operating System", n.d., <https://ros.org/>.
- [SOAP]
- World Wide Web Consortium (W3C), "SOAP Version 1.2", , <https://www.w3.org/TR/soap12/>.
- [SURVEILLANCE-CAPITALISM]
- Wikipedia contributors, "Surveillance Capitalism", Wikipedia -- The Free Encyclopedia, , <https://en.wikipedia.org/w/index.php?title=Surveillance_capitalism&oldid=1166575963>.
- [UYGHUR-AI]
- Roberts, H., Cowls, J., Morley, J., Taddeo, M., Wang, V., and L. Floridi, "The Chinese approach to artificial intelligence: an analysis of policy, ethics, and regulation", Springer Science and Business Media LLC, AI & SOCIETY vol. 36, no. 1, pp. 59-77, DOI 10.1007/s00146-020-00992-2, , <https://doi.org/10.1007/s00146-020-00992-2>.
- [UYGHUR-INTERNET]
- Clothey, R. and A. Meloche, "Don’t lose your moustache: community and cultural identity on the Uyghur internet in China", Informa UK Limited, Identities vol. 29, no. 3, pp. 375-394, DOI 10.1080/1070289x.2021.1964783, , <https://doi.org/10.1080/1070289x.2021.1964783>.
- [UYGHUR-WAR]
- Roberts, S., "The War on the Uyghurs: China's Internal Campaign against a Muslim Minority", Princeton University Press, DOI 10.2307/j.ctvsf1qdq, ISBN 9780691202211, , <https://doi.org/10.2307/j.ctvsf1qdq>.
Acknowledgments
Development of this document started as work undertaken under a grant agreement with the Internet Society Foundation [ISOC-FOUNDATION], but has since seen a number of revisions. Some revisions are inspired by work undertaken under grant agreements from Horizon Europe, specifically [NGI0-Discovery] and [NGI-Assure].¶
Copyright Notice
Copyright (C) the document authors.¶
This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.¶