24. Socio-Technical System Design


A socio-technical system (STS) is a social system operating on a technical base, e.g. email, chat, bulletin boards, blogs, Wikipedia, eBay, Twitter, Facebook and YouTube. Hundreds of millions of people use them every day, but how do they work? More importantly, can they be designed? If socio-technical systems are both social and technical, how can computing be both at once?

This chapter may be used as part of an STS design course. Each part therefore has a set of discussion questions that students can investigate and report back on to the class. Anyone wishing to set up a course in the design of social technologies is welcome to use this resource.

24.1 Part 1: The evolution of computing

Evolution is the process by which systems develop higher levels.

24.1.1 A short history

The first computer was conceived of as a machine of cogs and gears (Figure 1). It became operational in the 1950s and 1960s with the invention of semi-conductors. In the 1970s, a hardware company called IBM (footnote 1) was a computing leader. In the 1980s software became more important, so by the 1990s a software company called Microsoft (footnote 2) took the computing lead, giving ordinary people tools like word-processing. During the 1990s, computing became more personal, as the World Wide Web turned Internet URLs into web site names that people could read (footnote 3). Then a company called Google (footnote 4) offered the ultimate personal service, free access to the vast public library we call the Internet, and as everyone's gateway to the web became the new computing leader. In the 2000s, computing evolved yet again, to become a social medium as well as a personal tool. So now Facebook challenges Google, as Google challenged Microsoft, as Microsoft challenged IBM.


Author/Copyright holder: Courtesy of Jitze Couperus. Copyright terms and licence: CC-Att-SA-2 (Creative Commons Attribution-ShareAlike 2.0 Unported).

Figure 24.1: Charles Babbage (1791-1871) designed the first automatic computing engines. He invented computers but failed to build them. The first complete Babbage Engine was completed in London in 2002, 153 years after it was designed. Difference Engine No. 2, built faithfully to the original drawings, consists of 8,000 parts, weighs five tons, and measures 11 feet. The one pictured above is Serial Number 2 and is located in Silicon Valley at the Computer History Museum in Mountain View, California.

Author/Copyright holder: Courtesy of Jitze Couperus. Copyright terms and licence: CC-Att-SA-2 (Creative Commons Attribution-ShareAlike 2.0 Unported).

Figure: Details from Babbage's difference engine

Computing has re-invented itself every decade or so (Figure 2). What began as just hardware became about software, then people, and now communities. A physical machine exchanging electricity became software exchanging information, people exchanging meaning and now communities exchanging memes (footnote 5). The World Wide Web was initially an information web (Web 1.0), then an active web (Web 2.0), now a semantic web (Web 3.0) and is becoming a social web (Web 4.0). Each evolutionary step built on the previous, as social computing needs personal computing, personal computing needs software and software needs hardware. The corresponding evolution of computing design culminates in socio-technical design.

When the software era arrived, hardware continued to evolve but hardware leaders like IBM no longer dominated computing as before. The evolution of computing changed business fortunes by changing what computing is. Selling software makes more money than selling hardware because it changes more often. Web queries are even more volatile, but Google gave its service away for free and then sold advertising around it: it sold its services to those who sold theirs. The business model changed, because selling knowledge is not like selling software. Facebook's business model is still evolving. It now challenges Google because we relate to family and friends more than we query knowledge: social exchanges have more trade potential than knowledge exchanges.


Brian Whitworth and Adnan Ahmad. Copyright: CC-Att-SA-3 (Creative Commons Attribution-ShareAlike 3.0). See section "Exceptions" in the copyright terms below.

Figure 24.2: The computing evolution

Yet friends are just social dyads. It is naive to think that social computing will stop at a unit of two. Beyond friends are tribes, cities, city-states, nations and meta-nations like the European Union. A community isn't like a friend, as one has a friend but belongs to a community. With a world population of seven billion and growing, Facebook's 900 million active accounts are just the beginning. The future is computer support for groups that act: families, tribes, nations and eventually a global community, e.g. a group browser for people to tour the Internet together, commenting to each other as they go. Each could take turns to pick the next site, or all could follow an expert host. If socio-technology is just beginning, we need to understand how it works.

24.1.2 Computing levels

The basis of socio-technical design is general systems theory (Bertalanffy, 1968), which describes what the disciplines of science have in common: sociologists see social systems, psychologists cognitive systems, computer scientists information systems and engineers hardware systems. All refer to systems. In general systems theory, no discipline has a monopoly on science and all are valid. Discipline isomorphies (footnote 6) arise from common system properties, e.g. a social agreement measure that matches a biological diversity measure (Whitworth, 2006). Mechanical, logical, psychological and social systems are studied by engineers, computer scientists, psychologists and sociologists respectively. Applied to computing, these perspectives give levels (Table 1). Computing began at the mechanical level, evolved an information level, then acquired human and community levels.

Level         | Examples                                          | Discipline
Community     | Norms, culture, laws, zeitgeist, sanctions, roles | Sociology
Personal      | Semantics, attitudes, beliefs, feelings, ideas    | Psychology
Informational | Programs, data, bandwidth, memory                 | Computer science
Mechanical    | Hardware, motherboard, telephone, FAX             | Engineering

Table 24.1: Computing levels as discipline perspectives

Levels also help clarify terminology. In Figure 3, a technology is any tool people build to use, e.g. a spear is a technology (footnote 7). So a hardware device alone is a technology, but information technology (IT) is both hardware and software. Likewise, computer science (CS) (footnote 8) is a hybrid of mathematics and engineering, not either alone. So information technology is not a sub-set of technology, nor is computer science a sub-set of engineering.

Human computer interaction (HCI) is then a person plus an IT system, with physical, informational and psychological levels. Just as IT isn't hardware, so HCI isn't IT, but the child of IT and psychology. HCI links CS to psychology as CS linked engineering to mathematics. HCI introduces human requirements to computing and HCI systems turn information into meaning.

Finally, people can form an online community with hardware, software, personal and community levels. If the first two levels are technical and the last two social, the result is a socio-technical system (STS). If technology design is computing built to hardware and software requirements, then socio-technical design is computing built to personal and community requirements as well. In socio-technical systems, the new "user" of computing is the community (Whitworth, 2009b).

Currently, many terms refer to human factors in computing: engineering extends the term IT to cover applications built to user requirements; business calls computing used by people and organizations information systems (IS); education prefers information and communication technology (ICT) to describe computer communication; health chose the term informatics. Whether your preferred term is IT, IS, ICT or informatics doesn't change the basic idea, that people are now part of computing. This chapter uses the term HCI for consistency (footnote 9).

In this pan-discipline view, all of Figure 3 is computing, whose complexity arises from its discipline promiscuity. Socio-technology then designs a computer product as a social and technical system. Limiting computing to hardware (engineering) or software (computer science) denies its obvious evolution.

Levels in computing are ways to view it, not ways to partition it, e.g. a pilot in a plane is one system with different levels, not a mechanical part (the plane) plus a human part (the pilot). The physical level includes not just the plane body but also the pilot's body, as both have weight, volume etc. The information level isn't just the onboard computer, but also the neuronal processing in the pilot’s brain that generates the qualia (footnote 10) of human experience.


Brian Whitworth and Adnan Ahmad. Copyright: CC-Att-SA-3 (Creative Commons Attribution-ShareAlike 3.0). See section "Exceptions" in the copyright terms below.

Figure 24.3: Computer system levels

On the human level, the pilot sees the plane as an extension of his or her body, like extra hands or feet, and computer data as extra eyes or ears. On this level, the pilot is the actor and the plane is just a tool. The information level covers all processing, not just that of the onboard computers but also of the brain. The physical level is not just the body of the plane but also of the pilot. In an aerial conflict, the tactics of a piloted plane will differ from those of a computer drone. Finally, a plane in a squadron may do things it would not do alone, e.g. expose itself as a decoy so others can attack the enemy.

24.1.3 The reductionist dream

The reductionist dream, based on logical positivism (footnote 11), is that only the physical level is "real", so everything else must reduce to it. Yet when Shannon and Weaver defined information as a choice between physical options, the options were physical but the choosing wasn't (Shannon and Weaver, 1949). A message physically fixed in one way has by this definition zero information, because the other ways it could have been don't exist physically (footnote 12). It is strange but logically true that hieroglyphics one can't read have in themselves no information at all. It is reader choices that generate information, which until deciphered is unknown. If this were not so, data compression couldn't put the same data in a physically smaller signal, which it can. Information is defined by the encoding, not the physical message. If the encoding is unknown, the information is undefined, e.g. an electronic pulse sent down a wire could be a bit, a byte (an ASCII "1"), or, as the first word of a dictionary, say aardvark, many bytes. The information a message conveys depends on the decoding process, e.g. taking every 10th letter of a text gives a new message. Information doesn't exist physically, as it can't be touched or seen. Physicality is necessary for it, but not sufficient.
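The claim that information lies in the decoding, not the physical signal, can be sketched in a few lines of Python. The byte value and texts below are my own illustrations, not examples from the chapter:

```python
import zlib

# One physical signal: a single byte, fixed one way.
data = bytes([0x31])

print(format(data[0], '08b'))   # decoded as 8 bits: 00110001
print(data.decode('ascii'))     # decoded as an ASCII character: 1
print(data[0])                  # decoded as an integer: 49

# A different decoding rule extracts a different message from one text:
text = "socio-technical systems"
print(text[::10])               # every 10th letter gives a new message

# Compression: the same information in a physically smaller signal.
msg = b"the same data " * 50
packed = zlib.compress(msg)
print(len(packed) < len(msg))   # True: the information survives intact
```

The same pulse pattern is one of 2 choices, one of 256, or one of many thousands, depending on the decoding assumed, and compression shows the information was never identical to the raw physical bytes.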

That mathematical laws are real even though they aren't concrete is mathematical realism (Penrose, 2005). Mathematics is a science because its constructs are logical, not because they are physical. They are real because we conceive them, not because they physically exist. That they later prove physically useful is another matter. Cognitive realism is the claim that cognitions are also real, because we experience them. Mathematical or cognitive constructs defined in physical terms become empirical (footnote 13), and so the feedback loop of science still works, e.g. fear measured by heart rate is a cognitive construct measured in physical terms. Yet fear isn't just heart rate, as it can also be measured by pupil dilation, blood pressure, etc. Even terms like "red" aren't physical facts, as the light spectrum is continuous, with no discrete red section.

The physical level alone is what it is. It has no choices, so has no information, i.e. reductionism denies information science. In physics, reductionism gave a clockwork universe, where each state perfectly defined the next. Quantum theory flatly denies this, as quantum events are by definition random, i.e. explained by no physical history. Either quantum theory is wrong, which it has never been shown to be, or reductionism, the view that only the physical is real, is a naive nineteenth-century assumption that has had its day. If all science were physical, all science would be physics, which it is not.

Physics today has a quantum level, i.e. a primordial non-physical (footnote 14) reality below physical reality (Whitworth, 2011). Yet long ago, the great 18th Century German philosopher Kant argued that reality is just a view, that we don't see things as they are in themselves (Kant, 1999) (footnote 15). Levels return the observer to science, as quantum theory's measurement paradoxes demand. In philosophy, psychology, mathematics, computing and quantum physics, levels apply (footnote 16).

24.1.4 Science as a world view


Brian Whitworth and Adnan Ahmad. Copyright: CC-Att-SA-3 (Creative Commons Attribution-ShareAlike 3.0). See section "Exceptions" in the copyright terms below.

Figure 24.4: Computing levels as abstract views

A level is a world view, a way of seeing that is complete and consistent in itself. In the mechanical view, a computer is all hardware, but in the informational view it is all data. One can't point to a program on a motherboard nor to a device in a data structure. A mobile phone doesn't have hardware and software parts; it is hardware or software in toto. Hardware and software are ways to look at it, not ways to divide it up. Hardware becomes software when we view computing in a different way. Switching levels is like swapping glasses to see the same object close up. The disciplines of science are world views, like walking around an object to see it from different perspectives.

Levels are a fact of science, e.g. to describe World War II as a "history" of atomic events would be ridiculous. A political summary is more useful. Yet levels emerge from each other, as higher abstractions form from lower ones (Figure 4). Information needs hardware choices, cognitions need information flows, and communities need common cognitions. Conversely, without physical choices there is no information, without information there are no cognitions and without cognitions there is no community (footnote 17).

A world view has properties, like being:

  1. Essential. One cannot view a world without first having a point of view.

  2. Empirical. Based on world interaction, e.g. information is empirical.

  3. Complete. A world view consistently describes a whole world.

  4. Subjective. One chooses a view before viewing, explicitly or implicitly.

  5. Exclusive. One can't view two ways at once, as one can't sit in two places at once. (footnote 18)

  6. Emergent. One world view can emerge from another.

Levels as views must be chosen before viewing, i.e. pick a level, then view.

Yet how we see the world affects how we act, e.g. if we saw ultra-violet light, as bees do, previously dull flowers would become bright. Every flower shop would have to change its stock. Levels as higher ways to view a system are also new ways to operate and design it, e.g. new software protocols like Ethernet can improve network performance as much as new cables.

New ways to view computing affect how we build it, and how social levels affect technology design is socio-technical design. Level requirements cumulate, so socio-technical design includes hardware, software and HCI requirements (Figure 5). What appears as just hardware now has requirements outside itself, e.g. smart-phone buttons mustn't be too small for people's fingers. Levels are why computer design has evolved from hardware engineering to socio-technology.


Brian Whitworth and Adnan Ahmad. Copyright: CC-Att-SA-3 (Creative Commons Attribution-ShareAlike 3.0). See section "Exceptions" in the copyright terms below.

Figure 24.5: Computing applications and levels

For a village beside a factory, community needs come second to factory productivity, with ethics an after-thought, but for socio-technology, the community and the technology are one. If social needs are not met there is no community, and if there is no community the technology fails to perform as expected. Socio-technical design is the application of community requirements to people, software and hardware. The following sections derive each computing level from the previous.

24.1.5 From hardware to software

Hardware is any physical computer part, e.g. mouse, screen or case. It doesn't "cause" software nor is software a hardware output, as physical systems have physical outputs. We create software by seeing choice in physicality. Software needs hardware but it isn't hardware, as the same code can run on a PC, Mac or mobile phone. An entity-relationship diagram can work for any physical storage, whether disk, CD or USB, as data entities aren't disk sectors. Software assumes some hardware but no specific one.

If any part of a device acquires software, the whole system gets an information level, e.g. a computer is information technology even though its case is just hardware. We describe a system by its highest level, so if the operating system "hangs" (footnote 19) we say "the computer" crashed, even though the computer hardware is working fine. Rebooting fixes the software problem with no hardware change, so a software system can fail while the hardware still works perfectly.

Conversely, a computer can fail as hardware but not software, e.g. if a chip overheats. Replace the hardware part and the computer works with no software change needed. Software can fail without hardware failing, and hardware can fail without software failing. New hardware needn't change software and new software needn't change hardware. Each level has its own performance requirements: if software fails we call a programmer, but if hardware fails we call an engineer.

Software requirements can be met by hardware operations, e.g. reading a logical file takes longer if the file is fragmented, as the drive head must jump between physically distant disk sectors. Defragmenting a disk improves software access by putting files in adjacent physical sectors. File access improves, yet the physical drive read rate hasn't changed, i.e. hardware actions can meet software goals, e.g. database and network requirements gave rise to new hardware chip commands. The software goal of better information throughput also becomes the hardware goal, e.g. physical chip design today is as much about caching and co-processing as it is about cycle rate.
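A toy model makes the fragmentation point concrete. The seek and transfer figures below are assumptions for illustration, not real drive specifications; the point is only that total read time grows with the fragment count while the raw read rate stays fixed:

```python
# Assumed costs (illustrative only): each fragment adds one head seek.
SEEK_MS = 9.0        # average seek time per fragment, assumed
TRANSFER_MS = 0.1    # transfer time per block, assumed

def read_time_ms(blocks: int, fragments: int) -> float:
    """Time to read a logical file stored as `fragments` runs of sectors."""
    return fragments * SEEK_MS + blocks * TRANSFER_MS

# Same file (1000 blocks), same drive, different physical layout:
print(read_time_ms(1000, 1))    # defragmented: one seek, ~109 ms
print(read_time_ms(1000, 50))   # fragmented: fifty seeks, ~550 ms
```

The software-level property (file access time) improves about fivefold after defragmentation even though the hardware-level read rate is untouched.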

24.1.6 From software to HCI

HCI began with the personal computing era. Adding people to the computing equation meant that getting technology to work was only half the problem - the other half was getting people to use it. Web users who didn't like a site just clicked on. Web sites that got more hits succeeded because, given equal functionality, users chose the more usable product (Davis, 1989), e.g. Word replaced WordPerfect because it was more usable - users who took a week to learn WordPerfect picked up Word in a day. As computing previously gained a software level, it now gained a human level.

Human-computer interaction (HCI) is a person using IT, as IT is software using hardware. As computer science merges mathematics and engineering, but is neither, so HCI merges psychology and computer science, but is neither. Psychology is the study of people, and computer science the study of software, but the study of people using software, or HCI, is new. It is another computing discipline that cuts across other disciplines. HCI applies psychology to computing design, e.g. Miller's paper on cognitive span suggests limiting computer menu choices to seven (Miller, 1956). Multi-media computing, which engages our many senses, is another example of a human requirement defining computing.

24.1.7 From HCI to STS

Social structures, roles and rights add a fourth level to computing. Socio-technical design uses the social sciences in computing design as HCI uses psychology. STS is not part of HCI, nor is sociology part of psychology, because a society is more than the people in it, e.g. East and West Germany, with similar people, performed differently as communities, as is true for North and South Korea today. To say "the Jews" survived but "the Romans" didn't is to say that the society didn't continue, not its people, as no Roman-era people are alive today. A society is not just the people in it. People who gather to view a spectacle or customers who come to shop for bargains are not a community. A community, as used here, is an agreed form of social interaction that persists (Whitworth and de Moor, 2003).

Social interactions can have a physical or a technical base, e.g. a socio-physical system is people connecting by physical means. Face-to-face friendships cross seamlessly to Facebook because the social level persists across physical and electronic architecture bases. Whether electronically or physically mediated, a social system is always people interacting with people. Electronic communication may be “virtual”, but the people involved are real.


Brian Whitworth and Adnan Ahmad. Copyright: CC-Att-SA-3 (Creative Commons Attribution-ShareAlike 3.0). See section "Exceptions" in the copyright terms below.

Figure 24.6: The computing requirements hierarchy

A community works through people using technology as people work through software using hardware, so social requirements are now part of computing design (Sanders and McCormick, 1993). While sociology studies the social level alone, socio-technical design studies how personal and social requirements can be met by IT system design. Certainly this raises the cost of development, but then systems like social networks have far more performance potential.

24.1.8 The computing requirements hierarchy

The evolution of computing implies a requirements hierarchy (Figure 6). If the hardware works, software becomes the priority; if the software works, user needs arise; and when user needs are met, social requirements follow. As one level's issues are met, those of the next appear, as climbing one hill reveals another. As hardware over-heating problems are solved, software data-locking problems arise. As software response times improve, user response times become the issue. Companies like Google and eBay still seek customer satisfaction, but customers in crowds have social needs like fairness and synergy. As computing evolves, higher levels come to drive success. In general, the highest level of a system defines its success, e.g. social networks need a community to succeed. If no community forms, it doesn't matter how easy to use, fast or reliable the software is. Lower levels are essential to avoid failure, but higher levels are essential to success.

Level         | Requirements                                                                                                | Errors
Community     | Reduce community overload and clashes; increase productivity, synergy, fairness, freedom, privacy, transparency. | Unfairness, slavery, selfishness, apathy, corruption, lack of privacy.
Personal      | Reduce cognitive overload and clashes; increase meaning transfer efficiency.                                | User misunderstands, gives up, is distracted, or enters wrong data.
Informational | Reduce information overload and clashes; increase data processing, storage, or transfer efficiency.         | Processing hangs, data storage full, network overload, data conflicts.
Mechanical    | Reduce physical heat or force overload; increase heat or force efficiency.                                  | Overheating, mechanical fractures or breaks, heat leakage, jams.

Table 24.2: Computing errors by system level

Conversely, any level can cause failure, e.g. it doesn’t matter how high community morale is if the hardware fails, the software crashes or the interface is unusable. An STS fails if its hardware fails, if its program crashes or if users can’t figure it out. Hardware, software, personal and community failures are all computing errors (Table 2). The one thing they have in common is that the system fails to perform, and in evolution, what doesn't perform doesn't survive.

When computing was just technology, it only failed for technical reasons, but now it is socio-technology; it can also fail for social reasons. Technology is hard, but society is soft. That the soft should direct the hard seems counter-intuitive, but trees grow at their soft tips not their hard base. As a tree trunk doesn't direct its expanding canopy, so today's social computing was undreamt of by its technical base.

24.1.9 Design combinations


Author/Copyright holder: Courtesy of Ocrho. Copyright terms and licence: pd (Public Domain (information that is common property and contains no original authorship)).

Figure 24.7.A: Remote controls for Apple products are good examples of HCI Design


Author/Copyright holder: Unknown (pending investigation). Copyright terms and licence: Unknown (pending investigation). See section "Exceptions" in the copyright terms below.

Figure 24.7.B: Remote controls for televisions are not.

Design fields combine different requirements and design levels, as in Table 3:

  1. Ergonomics is the design of safe and comfortable machines for people. To design technology to human body needs like posture and eye-strain merges biology and engineering.

  2. Object design, as defined by Norman, applies psychological needs to mechanical design (Norman, 1990), e.g. a door's design affects whether it is pushed or pulled. An affordance is a physical object design that cues its use, as a button cues pressing. Physical systems designed to human requirements work better. In World War II, planes crashed until engineers designed cockpit controls to the cognitive needs of pilots as follows (with computing examples):

    1. Put the control by the thing controlled, e.g. a handle on a door (context menus).

    2. Let the control “cue” the required action, e.g. a joystick (a 3D screen button).

    3. Make the action/result link intuitive, e.g. press a joystick forward to go down, (press a button down to turn on).

    4. Provide continuous feedback, e.g. an altimeter, (a web site breadcrumbs line).

    5. Reduce mode channels, e.g. altimeter readings, (avoid edit and zoom mode confusions).

    6. Use alternate sensory channels, e.g. warning sounds, (error beeps).

    7. Let pilots "play", e.g. flight simulators, (a system sandbox).

  3. Human computer interaction applies psychological requirements to software design. Usable interfaces respect cognitive principles, e.g. by the nature of human attention, users don't usually read the entire screen. HCI turns psychological needs into IT designs as architecture turns buyer needs into house designs. Compare Steve Jobs' iPod to a television remote (Figure 7). Both do the same job (footnote 20), but one is a cool tool and the other a mass of buttons. One was designed to engineering requirements and the other to human needs. Which then performs better?

  4. Fashion is the social need to look good applied to object design. In computing, a mobile phone can be a fashion accessory, just like a hat or handbag. Its role is to impress, not just to function. Aesthetic criteria apply when people buy mobile phones to be trendy or fashionable, so color is as important as battery life in mobile phone selection.

  5. Socio-technology, the social design of information technology, applies social requirements to software design. Anyone online can see the power of socio-technology, but most see it as an aspect of their specialty. Sociologists study society as if it were apart from physicality, which it is not. Technologists study technology as if it were apart from community, which it is not. Only socio-technology studies how the social links to the technical, as a new discipline.

Field      | Target     | Requirements   | Example
STS        | IT         | Community ...  | Wikipedia, YouTube, eBay
Fashion    | Accessory  | Community ...  | Mobile phone as an accessory
HCI        | IT         | Personal ...   | Framing, border contrast, richness
Design     | Technology | Personal ...   | Keyboard, mouse
Ergonomics | Technology | Biological ... | Adjustable height screen

Table 24.3: Design fields by target and requirement levels

In Figure 8, higher level requirements filter down to affect lower level operation and design. This higher-affects-lower principle is that higher levels directing lower ones improves system performance. Any level requirement can translate down, e.g. communities require agreement to act, which at the citizen level gives norms, at the informational level laws, and at the physical level cultural events. The same applies online, e.g. online communities make demands of netizens (footnote 21) as well as of hardware. STS design then is about having it all: reliable devices, efficient code, intuitive interfaces and sustainable communities.


Brian Whitworth and Adnan Ahmad. Copyright: CC-Att-SA-3 (Creative Commons Attribution-ShareAlike 3.0). See section "Exceptions" in the copyright terms below.

Figure 24.8: Computing requirements cumulate

In physical society, over thousands of years, families formed tribes, tribes formed city-states, city-states formed nation states, and nations formed nations of nations, each with more complex social structures (Diamond, 1998). The social level in Figure 8 isn't just one step, as social units can form bigger social units (footnote 22) with new requirements (Whitworth and Whitworth, 2010).

24.1.10 The flower of computing

The evolution of computing involves four main specialties (Figure 9), but pure engineers see only mechanics, pure computer scientists only information, pure psychologists only cognitions and pure sociologists only social structures. So computing as a whole isn't pure, yet this hybrid is the future because performance isn't about purity, as practitioners understand (Raymond, 1999).


Brian Whitworth and Adnan Ahmad. Copyright: CC-Att-SA-3 (Creative Commons Attribution-ShareAlike 3.0). See section "Exceptions" in the copyright terms below.

Figure 24.9: The four stages of computing

The kingdom of computing is a realm divided, as academics specialize to get publications, grants and promotions (Whitworth and Friedman, 2009). Specialties guard their knowledge in journal castles with jargon walls, like medieval fiefdoms, but in doing so hold hostage knowledge that by its nature should be free. This division also disguises and limits the growth of computing. Every day more people use more computers to do more things in more ways, but computing staff rarely reach critical mass, because engineering, computer science, health (footnote 23), business, psychology, mathematics and education all compete for the computing crown (footnote 24). A realm divided is weak, and will get weaker if music, art, journalism, architecture etc. also set up outposts. Computing faculty scatter over the academic landscape like the tribes of Israel, some in engineering, some in computer science, some in health, etc. Yet we are one. Mathematics split up like this would be equally dilute.

The flower of computing is born of many disciplines but belongs to none. It is a new discipline in itself (Figure 10). For it to bear research fruit, its academic parents must set it free. Let us trade knowledge, not dominate it. Using different terms, models and theories for the same subject invites confusion. Universities that split computing research into small groups, isolated by discipline boundaries, distance themselves from its multi-disciplinary future. Until computing research becomes one, computing theory will remain as it is now - decades behind computing practice.

The flower of computing

Brian Whitworth and Adnan Ahmad. Copyright: CC-Att-SA-3 (Creative Commons Attribution-ShareAlike 3.0). See section "Exceptions" in the copyright terms below.

Figure 24.10: The flower of computing

24.1.11 Discussion questions

Research selected questions from the list below. If you are reading this chapter as part of a class - either at university or a commercial course - you can research these questions in pairs and report back to the class, with reasons and examples.

  1. How has computing evolved since it began? Is it just faster machines and better software? What is the role of hardware companies like IBM and Intel in modern computing?

  2. How has the computing business model changed as it evolved? Why does selling software make more money than selling hardware? Can selling knowledge make even more money? What about selling friendships? Can one sell communities?

  3. Is a kitchen table a technology? Is a law a technology? Is an equation a technology? Is a computer program a technology? Is an information technology (IT) system a technology? Is a person an information technology? Is an HCI system (person plus computer) an information technology? What, exactly, isn't a technology?

  4. Is any set of people a community? How do people form a community? Is a socio-technical system (an online community) any set of HCI systems? How do HCI systems form an online community?

  5. Is computer science part of engineering or of mathematics? Is human computer interaction (HCI) part of engineering, computer science or psychology? Is socio-technology part of engineering, computer science, psychology or one of the social sciences? (footnote 25)

  6. In an aircraft, is the pilot a person, a processor, or a physical object? Can one consistently divide the aircraft into human, computer and mechanical parts? If not, how else can one see it?

  7. What is the reductionist dream? How did it work out in physics? Does it recognize computer science? How did it challenge psychology? Has it worked out in any discipline?

  8. How much information does a physical book, which by definition is fixed in one form, have? If we say a book "contains" information, what is assumed? How is a book's information generated? Can the same physical book "contain" different information for different people? Give an example.

  9. If information is physical, how can data compression put the same information in a physically smaller signal? If information is not physical, how does data compression work? Can one encode more than one semantic stream into one physical message? Give an example.

  10. Is a bit a physical "thing"? Can you see or touch a bit? If a signal wire sends a physical "on" value, is that always a bit? If a bit isn't physical, can it exist without physicality? How can a bit require physicality but not itself be physical? What creates information, if it is not the mechanical signal?

  11. Is information concrete? If we can't see information physically, is the study of information a science? Explain. Are cognitions concrete? If we can't see cognitions physically, is the study of cognitions (psychology) a science? Explain. What separates science from imagination if it isn't physicality?

  12. Give three examples of other animal species who sense the world differently from us. If we saw the world as they do, would it change what we do? Explain how seeing a system differently can change how it is designed. Give examples from computing.

  13. If a $1 CD with a $1,000 software application on it is insured, what do you get if it is destroyed? Can you insure something that is not physical? Give current examples.

  14. Is a "mouse error" a hardware, software or HCI problem? Can a mouse's hardware affect its software performance? Can it affect its HCI performance? Can mouse software affect HCI performance? Give examples in each case. If a wireless mouse costs more and is less reliable, how is it better?

  15. Give three examples of a human requirement giving an IT design heuristic. This is HCI. Give three examples of a community requirement giving an IT design heuristic. This is STS.

  16. Explain the difference between a hardware error, a software error, a user error and a community error, with examples. What is the common factor here?

  17. What is an application sandbox? What human requirement does it satisfy? Show an online example.

  18. Distinguish between a personal requirement and community requirement in computing. Relate to how STS and HCI differ and how socio-technology and sociology differ. Why can't sociologists or HCI experts design socio-technical systems?

  19. What, in general, do people do if their needs aren't met by a physical situation? What do users do if their needs aren't met online? What is the difference? What do citizens of a physical community do if it doesn't meet their needs? What about an online community? Again, what is the difference? Give specific examples to illustrate.

  20. According to Norman, what is ergonomics? What is the difference between ergonomics and HCI? What is the difference between HCI and STS?

  21. Give examples of: Hardware meeting engineering requirements. Hardware meeting computer science requirements. Software meeting CS requirements. Hardware meeting psychology requirements. Software meeting psychology requirements. People meeting psychology requirements. Hardware meeting community requirements. Software meeting community requirements. People meeting community requirements. Communities meeting their requirements. Which of these are computing design requirements?

  22. Why is an iPod so different from TV or video controls? Which is better and why? Why has TV remote design changed so little in decades? If TV and the Internet compete for the hearts and minds of viewers, who will win?

  23. How does an online friend differ from a physical friend? Can friendships transcend physical and electronic interaction architectures? Give examples. How is this possible?

  24. Why do universities spread computing researchers across many disciplines? What is a cross-discipline? What past cross-disciplines became disciplines? Why is computing a cross-discipline?

24.2 Part 2: Design spaces

All my cuts are the best (said by a butcher to a housewife who asked him for the best cuts).

The previous section reviewed computing system levels; this one reviews constituent parts.

24.2.1 The elephant in the room

The beast of computing has regularly defied pundit predictions. Key advances like the cell-phone (Smith et al, 2002) and open-source development (Campbell-Kelly, 2008) weren't predicted by the experts of the day, though the signs were there for all to see. As experts pushed media-rich systems, lean text chat, blogs, texting and wikis took off. Even today, people with rich video-phones still text. Google's simple white screen, not Yahoo's multi-media graphics, scooped the search engine field. In gaming, the innovation was social gaming, not virtual reality helmets. Investors in Internet bandwidth lost money when the future wasn't all video.

The idea that in computing practice leads but theory bleeds has a long history. Over thirty years ago, paper was declared "dead" by the coming electronic paperless office (Toffler, 1980). Yet today, paper is used more than ever before. James Martin saw program generators replacing programmers, but today we still have a programmer shortage. A "leisure society" was supposed to arise as machines took over our work, but today we are less leisured than ever (Golden and Figart, 2000). The list goes on: email was supposed to be for routine tasks, the Internet was supposed to collapse without central control, video was supposed to replace text, teleconferencing was supposed to replace air travel, AI smart-help was supposed to replace help-desks, and so on.

We get it wrong time and again, because computing is the elephant in our living room. We can't see it because it is too big. In the story of the blind men and the elephant, one grabbed its tail and found it like a rope and bendy, another took a leg and declared it fixed like a pillar, a third felt an ear and thought it like a rug and floppy, while the last seized the trunk, and found it like a pipe but very strong (Sanai, 1968). Each saw a part but none saw the whole. How can one see an elephant by analyzing its toenails? (footnote 26)

24.2.2 Design requirements

To design a system is to find problems early: a misplaced wall on an architect's plan can be moved by the stroke of a pen. But design needs performance requirements, like efficiency. Requirements engineering analyzes stakeholder needs to specify what a system must do for them to sign off on the end product. It is basic to system design:

The primary measure of success of a software system is the degree to which it meets the purpose for which it was intended. Broadly speaking, software systems requirements engineering (RE) is the process of discovering that purpose...

-- Nuseibeh and Easterbrook, 2000: p. 1

A requirement can be a particular value (e.g. uses SSL), a range of values (e.g. costs less than $100), or a criterion scale (e.g. is secure). Given a system's requirements, designers can build it, but for computing the literature can't agree on what they are. One text lists usability, repairability, security and reliability (Sommerville, 2004, p. 24), but the ISO 9126-1 quality model has functionality, usability, reliability, efficiency, maintainability and portability (Losavio et al, 2004). Berners-Lee made scalability a World Wide Web criterion (Berners-Lee, 2000), while others stress open standards between systems (Gargaro et al, 1993). Business criteria are cost, quality, reliability, responsiveness and conformance to standards (Alter, 1999), but software architects prefer portability, modifiability and extendibility (de Simone and Kazman, 1995). Others espouse flexibility (Knoll and Jarvenpaa, 1994) and privacy (Regan, 1995). On the issue of what computer systems need to succeed, the literature is at best confused. The result is what developers call the requirements mess (Lindquist, 2005), which has ruined many a software project. It is the problem that agile methods address.
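The three requirement forms above can be sketched as simple checks against a system specification. This is a minimal illustration; the field names and thresholds are hypothetical, not from the chapter:

```python
# A minimal sketch of the three requirement forms: a particular value,
# a range of values, and a criterion scale. All names are illustrative.

def meets(spec, requirements):
    """Return True if a system spec satisfies every requirement check."""
    return all(check(spec) for check in requirements)

requirements = [
    lambda s: s["protocol"] == "SSL",    # particular value: uses SSL
    lambda s: s["cost"] < 100,           # range of values: less than $100
    lambda s: s["security_score"] >= 4,  # criterion scale: "is secure" (1-5)
]

spec = {"protocol": "SSL", "cost": 89, "security_score": 4}
print(meets(spec, requirements))  # True
```

The requirements mess arises when different stakeholders each bring a different, and partly conflicting, list of such checks.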

In current theories, each specialty sees only itself. Security specialists see security as availability, confidentiality and integrity (OECD, 1996), so to them reliability is part of security. Reliability specialists see dependability as reliability, safety, security and availability (Laprie and Costes, 1982), so to them security is part of a general reliability concept. Both cannot be true in general. Similarly, a usability review finds functionality and error tolerance part of usability (Gediga et al, 1999), while a flexibility review finds scalability, robustness and connectivity aspects of flexibility (Knoll and Jarvenpaa, 1994). In academia, each specialty expands to fill the theory space around it.

Yet there is recognition that no specialty is the be-all and end-all:

The face of security is changing. In the past, systems were often grouped into two broad categories: those that placed security above all other requirements, and those for which security was not a significant concern. But ... pressures ... have forced even the builders of the most security-critical systems to consider security as only one of the many goals that they must achieve.

-- Kienzle and Wulf, 1998: p5

Analyzing performance goals in isolation is giving diminishing returns.

24.2.3 Design spaces

Architect Christopher Alexander observed that vacuum cleaners with more powerful engines and more suction were also heavier, noisier and more expensive (Alexander, 1964). One performance criterion has a best point, but two criteria, like power and cost, give a best line. The efficient frontier of two performance criteria is the maximum of one for each value of the other (Keeney and Raiffa, 1976). Designing a system means choosing a point, one combination of values, in a multi-dimensional design space. So there are many "best" points, e.g. a cheap, heavy but powerful vacuum cleaner, or a light, expensive and powerful one (Figure 11). The efficient frontier of a design space is a surface of "best" combinations (footnote 27). Advanced system performance is not a one-dimensional ladder to excellence, but a station with many trains to many destinations.

A vacuum cleaner design space

Brian Whitworth and Adnan Ahmad. Copyright: CC-Att-SA-3 (Creative Commons Attribution-ShareAlike 3.0). See section "Exceptions" in the copyright terms below.

Figure 24.11: A vacuum cleaner design space
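The efficient frontier idea can be sketched in code: among candidate designs scored on several criteria, the "best" points are those no other design beats on every criterion. The candidate values below are illustrative, not from the chapter; each criterion is scaled so higher is better, so cost and weight are negated:

```python
# Sketch of an efficient (Pareto) frontier over a design space.
# Criteria per design: (power, -cost, -weight), all "higher is better".

def dominates(a, b):
    """a dominates b if a is at least as good on every criterion and
    strictly better on at least one."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def efficient_frontier(designs):
    """Return the non-dominated designs: the 'best' combinations."""
    return [d for d in designs
            if not any(dominates(o, d) for o in designs if o is not d)]

designs = [
    (900, -150, -8),  # powerful, cheap, heavy
    (900, -300, -5),  # powerful, expensive, light
    (800, -200, -9),  # dominated: worse than the first on every criterion
]

print(efficient_frontier(designs))  # → [(900, -150, -8), (900, -300, -5)]
```

Both the cheap-heavy and the expensive-light cleaners survive as "best" points, mirroring the many-trains metaphor: more criteria give a frontier, not a single winner.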

Designing in a multi-dimensional space gives many "best" points, so nature has no best animal. Successful life includes flexible viruses, reliable plants, social insects and powerful tigers, though the latter are endangered. In evolution, not only the strong are fit, and over-specialization can even lead to extinction. Likewise, computing has no "best". If computer performance were just about processing, we would all want supercomputers, but laptops with less power perform better for some users (David et al, 2003). Blindly adding software functions gives bloatware (footnote 28): applications full of features that no-one uses.

Design is then the art of reconciling many requirements in a system form, e.g. a quiet, reliable, cheap and powerful vacuum cleaner. It is the innovative synthesis of a performance form in a requirements space (Alexander, 1964). It isn't one dimensional, e.g. Berners-Lee chose HTML for the World Wide Web for its flexibility (across platforms), reliability and usability (easy to learn). An academic conference rejected his WWW proposal because HTML was inferior to SGML (Standard Generalized Markup Language). Specialists saw their specialty, not system performance. Even after the World Wide Web's phenomenal success, their blindness remained:

Despite the Web’s rise, the SGML community was still criticising HTML as an inferior subset ... of SGML

-- Berners-Lee, 2000: p96

What has changed since academia found the World Wide Web "inferior"? Not a lot. If it is any consolation, an equally myopic Microsoft also found it "unprofitable". In system design, a focus on any one criterion gives diminishing returns, whether it is functionality, security (OECD, 1996), extendibility (Simone and Kazman, 1995), privacy (Regan, 1995), usability (Gediga et al., 1999) or flexibility (Knoll and Jarvenpaa, 1994). Improving one aspect alone can even reduce performance, i.e. "bite back" (Tenner, 1997), e.g. a network so secure that no-one uses it. Advanced system performance does not result from one dimensional design.

24.2.4 Non-functional requirements

In traditional requirements engineering, criteria like usability are quality requirements that affect functional goals but can't stand alone (Chung et al, 1999). For decades, these non-functional requirements (NFRs), or “-ilities”, were considered second class requirements. They defied categorization, except to be non-functional. How exactly they differed from functional goals was never made clear (Rosa et al, 2001), yet most modern systems have more lines of interface, error and network code than functional code, and increasingly fail for "unexpected" non-functional reasons (footnote 29) (Cysneiros and Leite, 2002, p. 699).

The logic is that NFRs like reliability can't exist without functionality, so are subordinate to it. Yet by the same logic, functionality can't exist without reliability, e.g. a car that won't start has no speed function, nor does a car that is stolen or can't be driven. NFRs don't just modify performance; they define it. In nature, functionality isn't the only key to success, e.g. viruses hijack the functionality of other systems. Functionality differs from other system requirements only in being more obvious to us. It is really just one of many requirements. The distinction between functional and non-functional requirements is our bias, like seeing the sun go round the earth because we stand on the earth.

24.2.5 Constituent parts

In general systems theory, any system consists of:

  1. Parts, and

  2. Interactions.

But are software parts lines of code, variables or sub-programs? Let a system's elemental parts be those not formed of other parts. A mechanic stripping a car stops at the bolt element, as to decompose it further gives atoms, which are no longer mechanical. Each level has a different elemental part: physics has quantum strings, information has bits, psychology has qualia, and society has citizens (Table 4). Elemental parts then form complex parts as bits form bytes.

Level         | Elemental part    | Other parts
Community     | Citizen           | Friendships, groups, organizations, societies.
Personal      | Qualia            | Cognitions, attitudes, beliefs, feelings, theories.
Informational | Bit               | Bytes, records, files, commands, databases.
Physical      | Quantum strings?  | Quarks, electrons, nucleons, atoms, molecules.

Table 24.4: System parts by level

Let a system's constituent parts be those that interact to form the system but are not part of other parts (Esfeld, 1998). So, disconnecting a car entirely gives elemental parts not constituent parts, e.g. a bolt on a wheel isn't a constituent because it is part of the wheel.

To say a body is composed of cells ignores its structure: how elemental parts form constituent parts. Only in system heaps, like a pile of sand, are elemental parts also constituent parts. The body's constituent parts are the digestive system, the respiratory system, etc, not its cells. Just sticking together arbitrary physical parts, like head, arms, and legs, gives the Frankenstein effect (footnote 30) (Tenner, 1997).

24.2.6 Holism and specialization

The performance of a system of parts that interact isn't defined by decomposition alone. Even simple parts, like air molecules, can interact strongly to form a chaotic system like the weather (Lorenz, 1963). Gestalt psychologists called the whole being more than its parts holism, as a curve is just a curve but in a face becomes a "smile". Holism is how system parts change by interacting with others. Holistic systems are individualistic, because changing one part, by its interactions, can cascade to change the whole system drastically. People rarely look the same because one gene change can change everything. The brain is also holistic - one thought can change everything you know.

Yet a system's parts needn't be simple. The body began as one cell, a zygote, that divided into all the cells of the body, including liver, skin, bone and brain cells (footnote 31). Likewise, in early societies most people did most things, but today we have millions of specialist jobs. A system's specialization (footnote 32) is the degree to which its parts differ in form and action, especially its constituent parts.

Holism (complex interactions) and specialization (complex parts) are hallmarks of evolved systems, giving both levels and constituent specializations.

24.2.7 General performance requirements

Requirements engineering aims to define a system’s purposes. If levels and constituent specializations change those purposes, how can requirements engineering succeed? The answer proposed here is to take the view of the system itself, specifying requirements for different levels and constituent specializations. How these are reconciled is then the art of system design.

A system interacts with its environment to perform, i.e. to gain value and avoid loss in order to survive. In Darwinian terms, what doesn't survive fails and what does succeeds. So a system needs a boundary to exist apart from the world and an internal structure to support and manage that existence. It needs effectors to act upon the environment around it and receptors to monitor the world for risks and opportunities.

Constituent | Requirement   | Definition
Boundary    | Security      | To deny unauthorized entry, misuse or takeover by other entities.
            | Extendibility | To attach to or use outside elements as system extensions.
Structure   | Flexibility   | To adapt system operation to new environments.
            | Reliability   | To continue operating despite system part failure.
Effector    | Functionality | To produce a desired change on the environment.
            | Usability     | To minimize the resource costs of action.
Receptor    | Connectivity  | To open and use communication channels.
            | Privacy       | To limit the release of self information by any channel.

Table 24.5: System performance requirements by constituent specialty

So as cells evolved they first got a boundary membrane, then organelle and nuclear structures for support and control; later, eukaryotic cells evolved flagella to move and protozoa got photo-receptors (Alberts et al, 1994). We also have a skin boundary, metabolic and brain structures, muscle effectors and sense receptors, like the eye. Computers likewise have a case boundary, a motherboard internal structure, printer or screen effectors and keyboard or mouse receptors. Four constituent specializations, each with risk and opportunity goal options, give eight performance requirements (Table 5). The details are as follows:

  1. Boundary constituents manage the system boundary. They can be designed to deny outside things entry (security) or to use them (extendibility). In computing, virus protection is security and system add-ons are extendibility (Figure 24.12). In people, the immune system gives biological security and tool-use illustrates extendibility.

  2. Structure constituents manage internal operations. They can be designed to limit internal change to reduce faults (reliability), or to allow internal change to adapt to outside changes (flexibility). In computing, reliability reduces and recovers from error and flexibility is the system preferences that allow customization. In people, reliability is the body fixing a cell "error" that might cause cancer, while the brain learning illustrates flexibility.

  3. Effector constituents manage environment actions, so can be designed to maximize effects (functionality) or minimize resource use (usability). In computing, functionality is the menu functions, while usability is how easy they are to use. In people, functionality gives muscle effectiveness and usability is metabolic efficiency.

  4. Receptor constituents manage signals to and from the environment, so can be designed to open communication channels (connectivity) or close them (privacy). Connected computing can download updates or chat online, while privacy is the power to disconnect or log off. In people, connectivity is conversing and privacy is the legal right to be left alone. In nature, privacy is camouflage, and the military calls it stealth.

Every system is somehow created, which takes effort whether for applications that are built or organisms that are born. A system's ability to reproduce is important but outside the current scope, as apart from virus programs few computer systems do this.

These general system criteria map well to current terms (Table 6). They apply at any level, but as what is exchanged changes, so do the names used:

  1. Hardware systems exchange energy. So "functionality" is power, i.e. hardware with high CPU cycle or disk read-write rates. "Usable" hardware uses less power for the same result, e.g. mobile phones that last longer. Reliable hardware is rugged enough to work if you drop it, and flexible hardware is mobile enough to still work if you move around, i.e. change environments. Secure hardware blocks physical theft, e.g. by laptop cable locks, and extendible hardware has ports for peripherals to be attached. Connected hardware has wired or wireless links, and private hardware is tempest proof, i.e. it doesn't physically leak energy.

  2. Software systems exchange information. Functional software has many ways to process information, while “usable” software uses less CPU processing (“lite” apps). Reliable software avoids errors or recovers from them quickly. Flexible software is operating system platform independent. Secure software can't be corrupted or overwritten. Extendible software can access OS program library calls. Connected software has protocol "handshakes" to open read/write channels. Private software can encrypt information so others can't see it.

  3. HCI systems exchange meaning, including ideas, feelings and intents. In functional HCI the human computer pair is effectual, i.e. meets the task goal. Usable HCI requires less intellectual, affective or conative (footnote 33) effort, i.e. is intuitive. Reliable HCI avoids or recovers from unintended user errors by checks or undo choices — the web Back button is an HCI invention. Flexible HCI lets users change language, font size or privacy preferences, as each person is a new environment to the software. Secure HCI avoids identity theft by user password. Extendible HCI lets users use what others create, e.g. mash-ups and third party add-ons. Connected HCI communicates with others, while privacy includes not getting spammed or being located on a mobile device.

Each level applies the same ideas to a different system view. The community level is covered later.

GSR Criterion | Related Criteria
Functionality | Effectualness, capability, usefulness, effectiveness, power, utility.
Usability     | Ease of use, simplicity, user friendliness, efficiency, accessibility.
Extendibility | Openness, interoperability, permeability, compatibility, standards.
Security      | Defense, protection, safety, threat resistance, integrity, inviolable.
Flexibility   | Adaptability, portability, customizability, plasticity, agility, modifiability.
Reliability   | Stability, dependability, robustness, ruggedness, durability, availability.
Connectivity  | Networkability, communicability, interactivity, sociability.
Privacy       | Tempest proof, confidentiality, secrecy, camouflage, stealth, encryption.

Table 24.6: Related performance criteria

Mozilla/Firefox add-ons

Copyright © Mozilla. All Rights Reserved. Used without permission under the Fair Use Doctrine (as permission could not be obtained).  See section "Exceptions" in the copyright terms below.

Figure 24.12: Mozilla/Firefox add-ons

24.2.8 A general system design space

The above gives the general system design space of Figure 13, where for a particular system:

  • The area is the overall performance requirements met, i.e. performance in general.

  • The shape is the requirement weights, defined by the environment.

  • The lines are design requirement "tensions" (see below).

A general system performance design space

Brian Whitworth and Adnan Ahmad. Copyright: CC-Att-SA-3 (Creative Commons Attribution-ShareAlike 3.0). See section "Exceptions" in the copyright terms below.

Figure 24.13: A general system performance design space

This space has active requirements that enhance opportunities (footnote 34) and passive ones that reduce risks (footnote 35); taking opportunities is as important to performance as reducing risk (Pinto, 2002). Criteria weights vary by environment, so security matters more in threat environments while extendibility pays off in opportunity environments (Whitworth et al, 2008). These performance criteria are general because they have no inherent contradictions, e.g. a bullet-proof plexi-glass room can be secure but not private, while encrypted files can be private but not secure. Reliability provides services but security denies them (Jonsson, 1998), so a system can be reliable but insecure, unreliable but secure, unreliable and insecure, or reliable and secure. Functionality needn't deny usability (Borenstein and Thyberg, 1991), nor connectivity privacy. Cross-cutting requirements (Moreira et al, 2002) can be reconciled by innovative design if they are logically modular, so one can get both.
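The "area" and "shape" notions above can be sketched as a weighted score over the eight criteria, with weights set by the environment. The scores and weightings below are purely illustrative, not from the chapter:

```python
# Sketch: overall performance as a weighted sum of the eight general
# criteria, with environment-dependent weights. All numbers illustrative.

CRITERIA = ["security", "extendibility", "flexibility", "reliability",
            "functionality", "usability", "connectivity", "privacy"]

def performance(scores, weights):
    """Overall performance: weighted sum of how far each requirement is met."""
    return sum(scores[c] * weights[c] for c in CRITERIA)

# A threat environment weights security highly; an opportunity
# environment weights extendibility instead.
threat = {c: 1.0 for c in CRITERIA}
threat["security"] = 3.0
opportunity = {c: 1.0 for c in CRITERIA}
opportunity["extendibility"] = 3.0

secure_design = {c: 0.5 for c in CRITERIA}
secure_design["security"] = 0.9        # strong on security
open_design = {c: 0.5 for c in CRITERIA}
open_design["extendibility"] = 0.9     # strong on extendibility

print(performance(secure_design, threat) > performance(open_design, threat))  # True
```

The same two designs swap rank when the environment, and hence the weight "shape", changes: the open design wins in the opportunity environment. No design is best in every environment.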

24.2.9 Design tensions and innovations

A design tension is when satisfying one design requirement denies another. Applying different requirements to the same constituent gives a design tension, e.g. castle walls that protect against attacks but need a gate to receive supplies, or computers impenetrable to virus attacks that need plug-in software hooks. These contrasts are not anomalies, but built into the nature of systems.

Design tensions begin slack for new systems, but increase as performance improves. Eventually, like stretched rubber bands, the system becomes so "tight" that advancing one requirement can easily pull back another, or more than one. In the version 2 paradox, development effort spent improving a successful product can decrease its performance!

To expand a performance web, one can't just pull one corner, e.g. in 1992, Apple CEO Sculley introduced the hand held Newton, claiming that portable computing was the future. We now know he was right, yet in 1998 Apple dropped the line due to poor sales. The Newton’s small screen made data entry hard, i.e. the portability gain was nullified by a usability loss. Only when Palm’s Graffiti language improved handwriting recognition, did the personal digital assistant market revive. Sculley's innovation was only half the answer - the other half was resolving the usability problems created by increasing flexibility. Innovative design must meet specialist requirements and resolve design tensions.

24.2.10 Project development

Constituent  | Code           | Requirement   | Analysis                 | Testing
Actions      | Application    | Functionality | Task                     | Business
             | Interface      | Usability     | Usability                | User
Interactions | Authorization  | Security      | Threat                   | Penetration
             | Plug-ins       | Extendibility | Standards                | Compatibility
Changes      | Error recovery | Reliability   | Stress                   | Load
             | Preferences    | Flexibility   | Contingency              | Situation (Beta)
Interchanges | Network        | Connectivity  | Channel                  | Communication
             | Rights         | Privacy       | Legitimacy (footnote 36) | Community

Table 24.7: Project development specializations by constituent

The days when programmers could list a system's functions then just code them are gone, if they ever existed. Today, design involves not only many specialties but also their interaction. A system development could involve up to eight specialists, with distinct requirements, analysis and testing (Table 7). Smaller systems might have four (actions, interactions, changes and interchanges), two (opportunities and risks) or just one (performance). Design tensions are reduced by agile methods, where specialists talk to each other and stakeholders, but system development also needs innovators, people to cut across specialist boundaries to resolve cross-cutting design tensions.

24.2.11 Discussion questions

Research selected questions from the list below. If you are reading this chapter as part of a class - either at university or a commercial course - you can research these questions in pairs and report back to the class, with reasons and examples.

  1. What three widespread computing expectations didn't happen? Why not? What three unexpected computing outcomes did happen? Why?

  2. What is a system requirement? How does it relate to system design? How do system requirements relate to performance? Or to system evaluation criteria? How can one specify or measure system performance if there are many factors?

  3. What is the basic idea of general systems theory? Why is it useful? Can a cell, your body, and the earth all be considered systems? Describe Lovelock’s Gaia Hypothesis. How does it link to both General Systems Theory and the recent film Avatar? Is every system contained within another system (environment)?

  4. Does nature have a best species? If nature has no better or worse, how can species evolve to be better? Or if it has a better and worse, why is current life so varied instead of just the “best”? (footnote 37) Does computing have a best system? If it has no better or worse, how can it evolve? If it has a better and worse, why is current computing so varied? Which animal actually is “the best”?

  5. Why did the electronic office increase paper use? Give two good reasons to print an email in an organization. How often do you print an email? When will the use of paper stop increasing?

  6. Why wasn't social gaming predicted? Why are MMORPG human opponents better than computer ones? What condition must an online game satisfy for a community to "mod" it (add scenarios)?

  7. In what way is computing an "elephant"? Why can't it be put into an academic "pigeon hole"? (footnote 38) How can science handle cross-discipline topics?

  8. What is the first step of system design? What are those who define what a system should do called? Why can't designers satisfy every need? Give examples from house design.

  9. Is reliability an aspect of security or is security an aspect of reliability? Can both these things be true? What are reliability and security both aspects of? What decides which is more important?

  10. What is a design space? What is the efficient frontier of a design space? What is a design innovation? Give examples (not a vacuum cleaner).

  11. Why did the SGML academic community find Tim Berners-Lee's WWW proposal of low quality? Why didn't they see the performance potential? Why did Microsoft also find it “of no business value”? How did the WWW eventually become a success? Given that business and academia now use it extensively, why did they reject it initially? What have they learned from this lesson?

  12. Are NFRs like security different from functional requirements? By what logic are they less important? By what logic are they equally critical to performance?

  13. In general systems theory (GST), every system has what two aspects? Why doesn't decomposing a system into simple parts fully explain it? What is left out? Define holism. Why are highly holistic systems also individualistic? What is the Frankenstein effect? Show a "Frankenstein" web site. What is the opposite effect? Why can't “good” system components just be stuck together?

  14. What are the elemental parts of a system? What are its constituent parts? Can elemental parts be constituent parts? What connects elemental and constituent parts? Give examples.

  15. Why are constituent part specializations important in advanced systems? Why do we specialize as left-handers or right-handers? What about the ambidextrous?

  16. If a car is a system, what are its boundary, structure, effector and receptor constituents? Explain its general system requirements, with examples. When might a vehicle's "privacy" be a critical success factor? What about its connectivity?

  17. Give the general system requirements for a browser application. How did its designers meet them? Give three examples of browser requirement tensions. How are they met?

  18. How do mobile phones meet the general system requirements, first as hardware and then as software?

  19. Give examples of usability requirements for hardware, software and HCI. Why does the requirement change by level? What is "usability" on a community level?

  20. Are reliability and security really distinct? Can a system be reliable but insecure, unreliable but secure, unreliable and insecure, or reliable and secure? Give examples. Can a system be functional but not usable, not functional but usable, not functional or usable, or both functional and usable? Give examples.

  21. Performance is taking opportunities and avoiding risks. Yet while mistakes and successes are evident, missed opportunities and mistakes avoided aren't. Explain how a business can fail by missing an opportunity, with WordPerfect vs Word as an example. Explain how a business can succeed by avoiding risks, with air travel as an example. What happens if you only maximize opportunity? What if you only reduce risks? Give examples. How does nature both take opportunities and avoid risks? How should designers manage this?

  22. Describe the opportunity enhancing general system performance requirements, with an IT example of each. When would you give them priority? Describe the risk reducing performance requirements, with an IT example of each. When would you give them priority?

  23. What is the Version 2 paradox? Give an example from your experience, of software that got worse on an update. You can use a game example. Why does this happen? How can designers avoid this?

  24. Define extendibility for any system. Give examples for a desktop computer, a laptop computer and a mobile device. Give examples of software extendibility, for email, word processing and game applications. What is personal extendibility? Or community extendibility?

  25. Why is innovation so hard for advanced systems? What stops a system being both secure and open? Or powerful and usable? Or reliable and flexible? Or connected and private? How can such diverse requirements ever be reconciled?

  26. Give two good reasons to have specialists in a large computer project team. What happens if they disagree? Why are cross-disciplinary integrators also needed?

24.3 Part 3: Socio-technical design

Let the social define the technical

Social ideas like freedom seem far removed from computer code but computing today is social. That technology designers aren't ready, have no precedent or don't recognize social needs is irrelevant. Like a baby being born, online society is pushing forward, ready or not. And like new parents, socio-technical designers are causing it, whether they want to or not. As the World Wide Web's creator observes:

... technologists cannot simply leave the social and ethical questions to other people, because the technology directly affects these matters

-- Berners-Lee, 2000: p124

The online reality is that how people interact in socio-technical systems depends entirely on the software.

24.3.1 Designing work management

The term socio-technical was first introduced by the Tavistock Institute (footnote 39) in the late 1950s to oppose Taylorism - the reduction of jobs to efficient elements on assembly lines in mills and factories. Community level needs gave rise to work-place management ideas such as (Porra and Hirschheim, 2007):

  1. Congruence. A process must match its objective - democratic results need democratic means.

  2. Minimize control. Give employees clear goals, but let them decide how to achieve them.

  3. Local control. Let those experiencing a problem change the system, not absent managers.

  4. Flexibility. Without "extra" skills to handle change, specialization will precede extinction.

  5. Boundary innovation. Innovate at the boundaries, where work goes between groups.

  6. Transparency. Give information first to those it affects, e.g. give work rates to workers.

  7. Evolution. Work system development is an iterative process that never stops.

  8. Lead by example. Chinese saying: "If the General takes an egg, his soldiers will loot a village." (footnote 40)

  9. Support human needs. Work that lets people learn, choose, feel and belong gives loyal staff.

In computing it became a call for the ethical use of technology. Yet social needs apply to technology design as well as to work management. Technology that mediates social interactions must also satisfy social needs. In the industrial revolution, “dark satanic mills” enslaved people, so technology was the enemy. Yet people ran those factories. It was the rich oppressing the poor, as always, with machines just letting them do it better. Technology is an effect magnifier, i.e. it isn't in itself good or evil. The people of nineteenth century Britain rejected slavery (footnote 41) but embraced car and phone technologies. In today's information revolution we “love” technology. It is on the other side of the class war, as Twitter, Facebook and YouTube support the Arab spring. Yet the core socio-technical principle is the same:

(footnote 42). Yet America and England somehow got democracy, and now it is unclear why our predecessors ever settled for less. Democracies out-produce autocracies, as free people do more, and online is no different (Beer and Burrows, 2007). Communities perform by improving social interactions, which happens when citizens do what they should - not what they can.

24.3.2 Social requirements

One can't design socio-technology in a social vacuum. Fortunately, while virtual society is new, people have been socializing for thousands of years. We know that fair communities prosper but corrupt ones don't (Eigen, 2003). Social inventions like laws, fairness, freedom, credit and contracts were bought with blood and tears (Mandelbaum, 2002), so why start anew online? Why reinvent the social wheel in cyber-space (Ridley, 2010)? Why re-learn electronically what we already know physically, if the social level in both cases is the same?

When nuclear technology magnified the physical power of war, humanity had a choice: to destroy itself physically by nuclear holocaust, or not. We didn't destroy ourselves by choice, not by technology, which just upped the ante. As the new bottle of information technology fills with the old wine of society, the stakes are raised again. Today’s information revolution vastly increases the power to gather, store and distribute information, for good or ill (Johnson, 2001). We can be hunter-gatherers of the information age or an online civilization (Meyrowitz, 1985). Yet a stone-age society with space-age technology isn’t a good mix.

In general, we are “environment blind”. We don’t see social environments not because they are too far away but because they are too close. As a fish is the last to see water, or a bird the air, so we can’t see social environments. Yet if technology is to support civilization, civilization's requirements must be specified. Computing can’t implement what it can’t specify. We live in social environments every day, but struggle to specify them (footnote 43), e.g. a shop-keeper swipes a credit card with a reading device designed not to store data like the card number or PIN. It is designed to the social requirement that shopkeepers don't steal from customers, even if they can. Without this, credit would collapse, and a social failure, or depression, can be worse than a natural disaster. In sum, credit card readers support social trust by design.

Likewise, if online systems take and sell customer data like home address and phone for advantage, users will lose trust, and either refuse to register at all, or register with fake data, like "123 MyStreet, MyTown, NJ" (Foreman and Whitworth, 2005). The key to online privacy is not storing data. To say it will never be revealed isn't good enough, as companies can be forced by governments or bribed by cash to reveal data. One can't be forced or bribed to give data one doesn't have. The best way to guarantee online trust is not to store unneeded information in the first place (footnote 44).
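The card-reader idea above can be sketched in code. This is a hypothetical payment handler, not any real terminal's API (all names are invented): the card number and PIN are used transiently for authorization, and the persisted record holds only what the receipt needs, so there is nothing stored that can later be forced or bribed out of anyone.

```python
import secrets

def authorize(card_number: str, pin: str, amount: float) -> bool:
    """Stand-in for the bank's authorization call, which happens elsewhere."""
    return True

def process_payment(card_number: str, pin: str, amount: float) -> dict:
    approved = authorize(card_number, pin, amount)
    # Persist only what the transaction record needs: a random reference
    # token and the last four digits. The full number and PIN are never
    # written anywhere - privacy by design, not by policy promise.
    return {
        "token": secrets.token_hex(8),
        "card_last4": card_number[-4:],
        "amount": amount,
        "approved": approved,
    }

receipt = process_payment("4111111111111111", "1234", 19.99)
assert "pin" not in receipt and receipt["card_last4"] == "1111"
```

The design choice is that the guarantee lies in what the system cannot do: no later request, legal or criminal, can extract data the record never contained.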

24.3.3 The socio-technical gap


Brian Whitworth and Adnan Ahmad. Copyright: CC-Att-SA-3 (Creative Commons Attribution-ShareAlike 3.0). See section "Exceptions" in the copyright terms below.

Figure 24.14: The socio-technical gap

Socio-technical design is the application of community requirements to people, software and hardware. Purely technical design creates a socio-technical gap (Figure 14) between what technology supports and what people want (Ackerman, 2000), e.g. designing email to let anyone message anyone without permission gave the spam problem. Filters help on a personal level, but transmitted spam as a system problem has never stopped growing. While inbox spam is held constant by filters, transmitted spam grew from 20% to 40% in 2002-2003 (Weiss, 2003), to 60-70% in 2004 (Boutin, 2004), to 86.2-86.7% of the 342 billion emails sent in 2006 (MAAWG, 2006; MessageLabs, 2006), to 87.7% in 2009 and 89.1% in 2010 (MessageLabs, 2010). A 2004 prediction that within a decade over 95% of all emails transmitted over the Internet would be spam is coming true (Whitworth and Whitworth, 2004).

Filters address spam as a user problem, but it is really a community problem. Transmitted spam uses Internet processing, bandwidth and storage whether users behind their filter walls see it or not. Only socio-technology can resolve social problems like spam, because in the "spam wars", technology helps both sides, e.g. image spam can bypass text filters, AI can solve captchas (footnote 45), botnets can harvest web site emails, and zombie sources can send emails. So spam isn't going away any time soon (Whitworth and Liu, 2009a).
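The arms race shows up even in a toy filter. The keyword rule below (words and message format invented for illustration) catches text spam but lets an image-only message straight through, which is one reason filtering alone never ends the war:

```python
import re

# A toy keyword filter, illustrating spam-fighting at the user level.
SPAM_WORDS = {"viagra", "lottery", "winner"}

def is_spam(message: dict) -> bool:
    # Text rules only see text: extract words and check the blocklist.
    words = set(re.findall(r"[a-z]+", message.get("text", "").lower()))
    return bool(words & SPAM_WORDS)

text_spam = {"text": "You are a lottery WINNER!"}
image_spam = {"text": "", "image": b"pixels advertising the same lottery"}

assert is_spam(text_spam)       # caught by the keyword rule
assert not is_spam(image_spam)  # the same pitch as pixels slips past
```

Each such rule invites a counter-move, so the filter treats the symptom at one inbox while the community-level cost of transmission remains.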

Aliens visiting our planet might suppose our email system was built for machines, as most of the messages it transmits go from one computer (spammer) to another computer (filter), untouched by human eye. This result is not just bad luck. A communication technology isn't a Pandora's box, unknown until opened, because we built it. Spam happens when we build technologies instead of socio-technologies.

24.3.4 Legitimacy analysis

In politics, a legitimate government is seen as rightful by its citizens, i.e. accepted. In contrast, illegitimate governments need force of arms and propaganda to stay in power. By extension, legitimate interaction is accepted by the parties involved, who freely repeat it, e.g. fair trade. Legitimacy has been specified as: fairness and public good (Whitworth and de Moor, 2003). Physical and online citizens prefer legitimate communities because they perform better socially.

In physical society, legitimacy is maintained by laws, police and prisons that punish criminals. Legitimacy is the human concept by which judges create new laws and juries decide never-before-seen cases. The higher affects lower principle applies here: communities engender human ideas like fairness, which generate informational laws, which in turn govern physical interactions. Communities affect people to create rules to direct acts that benefit the community, i.e. higher level goals drive lower level operations to improve system performance. Doing so online, applying social principles to technical systems, is socio-technical design.

Conversely, over time laws get a "life of their own" and the tail wags the dog, e.g. copyright laws designed to encourage innovators are now just a tool to perpetuate corporate profit (Lessig, 1999) (footnote 46). Unless continuously "re-invented" at the human level, laws inevitably decay. Today's online society is a social evolution as well as a technical one. The social Internet is a move to community goals like service and freedom, so to reduce it to a hawker market place would be its devolution. So let the old ways of business, politics and academia be changed by the Internet, not the other way around.

One can't just “stretch” physical laws into cyberspace (Samuelson, 2003) because they often:

  1. Don't transfer (Burk, 2001), e.g. what is online “trespass”?

  2. Don't apply, e.g. what law applies to online "cookies" (Samuelson, 2003)?

  3. Change too slowly, e.g. laws change in years but code changes in months.

  4. Depend on code (Mitchell, 1995), e.g. anonymity means actors can't be identified.

  5. Have no jurisdiction. U.S. law applies to U.S. soil but cyber-space isn't "in" America.


Author/Copyright holder: Unknown (pending investigation). Copyright terms and licence: Unknown (pending investigation). See section "Exceptions" in the copyright terms below.

Figure 24.15: Legitimacy analysis

There are no shortcuts here, as to repeat the past isn’t progress. To get legitimacy online we must build it in again, because online code is law (Lessig, 1999). The software that mediates online interaction has control, e.g. any application could upload any hard drive file on your computer to any server. In itself, code could create a perfect online police state, where everyone is monitored, all "wrong" acts punished and all "undesirables" excluded, i.e. a tyranny of code.

Yet code is also an opportunity to be better than the law, based on legitimacy analysis (Figure 15). Physical justice, by its nature, operates after the fact, i.e. a person must commit a crime to be punished. With appeals, this can take years, and justice delayed is justice denied. In contrast, code as the online environment itself, acts right away. It can be designed to enable social acts not just deny anti-social ones. Socio-technical systems that are legitimate by design perform better socially (Whitworth and de Moor, 2003).

Saying that technology supporting social requirements, like fairness, improves system performance is the radical core of socio-technical design. So is every STS designer an application law-giver, like Moses coming down from the mountain with tablets of code instead of stone? Not quite, as STS directives go to software, not people. Telling people to choose rightly is the job of ethics. The job of right code, like right laws, is to allow what is legitimate, not to enforce choices on people. Socio-technical design is socializing technology, not technologizing society: the higher directing the lower, not the reverse.

To achieve online what laws do offline, STS developers must re-invoke legitimacy for each application. It seems hard but every citizen on jury service already interprets the “spirit of the law” for complex physical cases. STS design is the same but for application cases. That the result isn't perfect doesn't matter. Cultures have different laws and ethics but all have some laws and ethics, because some social requirements are always better than none.

Yet to build a society as one does a house is wrong. Social engineering by coercion, propaganda or indoctrination is a few enforcing their will on the many. Yet a community by definition is many people working together, so an elite few enslaving the rest isn't a community. To socially engineer a community is to treat people like bricks in a wall. It denies freedom and accountability, which are social requirements. Communities can’t be “built” because their parts are actors. They just emerge as people interact.

24.3.5 The web of social performance requirements

Communities interact with others, using spies as "eyes", diplomats to communicate, engineers to effect, soldiers to defend, intellectuals to adapt and traders to extend, but a community can also interact with itself, to communicate or synergize, as follows:

  1. Productivity. Previously, functionality was what a system can do. What communities do is to produce bridges, dams, art, science, etc. This productivity is based on citizen competence, which education systems increase. Help and FAQ systems do the same online.

  2. Synergy. Previously, usability was less effort per result. Communities do this by synergy, by citizens giving to others (footnote 47). Public goods like roads and hospitals are specialists giving what they do well to all. If everyone in a community specializes and offers their services to others, all get more for less. Wikipedia is synergy, as many give a little knowledge and all get a lot.

Productivity and synergy are in tension, as one invokes competition and the other cooperation (footnote 48). One improves what citizen "parts" do, the other how they interact. Service by free-good citizens reconciles them, as free citizens raise productivity and good citizens increase synergy. Free-goodness combines the invisible hand of the market and the visible hand of public good (Whitworth and Whitworth, 2010).

  1. Freedom. Previously, flexibility was changing a system to fit the environment. A community gains flexibility by giving citizens freedom, i.e. the right to not be a slave (footnote 49). It allows local resource control to increase performance, as do decentralized network protocols like Ethernet.

  2. Order. Previously, reliability was a system's ability to survive internal part failure or error. A community gets reliability by order, that citizens, by rank, role or job, know and do their duty. Some cultures set up warrior or merchant castes to achieve this. Online order is also by roles, e.g. Sysop or Editor.

Freedom and order are in tension, as freedom has no class but order does. Democracy merges freedom and order, as free citizens select an order hierarchy, not just of President or Prime Minister, but for all positions. Democracy is rare online, but Slashdot uses it.

  1. Ownership. Previously, security was a system's defense against outside takeover. A community is secure internally by ownership, e.g. to "own" a house guarantees that if another takes it, the community will step in (footnote 50). Online, ownership works by access authorization.

  2. Openness. Previously, extendibility was a system using what is outside itself. A community doing this was America's invitation to the world:

Give me your tired, your poor, your huddled masses yearning to breathe free.

A society is open internally if any citizen can achieve any role by merit, as Abraham Lincoln, born in a log cabin, became US president. The opposite is nepotism or cronyism: giving jobs to family or friends. If community advancement is by who you know, not what you know, performance reduces. Open source systems like SourceForge let people advance by merit.

Ownership and openness are in tension, as the right to keep out denies the right to go in. Fairness can reconcile public access and private control. Offline fairness is based on justice systems but online it is supported by code.

  1. Connectivity. Previously, connectivity was the ability to open communication channels. Communities connect internally by media like TV, newspapers, radio, and now the Internet. A centrally controlled press is propaganda, while a free press lets everyone put a point of view.

  2. Privacy. Previously, privacy was a citizen's right to control information about themselves. It is the ownership of self-data, not secrecy, so it includes the right to make personal data public.

Connectivity and privacy are in tension, as opening a channel to connect can reveal personal data. Transparency illustrates a combination: public officials are entitled to privacy, except when acting for a community. Transparency is a citizen's right to see governance done on their behalf, including money spent and privileges given.

In summary, a community must increase citizen competence to be productive, increase trust and deny crime to get synergy, give freedoms to adapt and innovate, establish order to define responsibilities, allocate ownership to prevent property conflicts, be open to talent outside and inside (footnote 51), be connected to generate agreement, and grant privacy to relieve citizens from the pressure of social interaction. All these increase social performance and prosperity.

24.3.6 Synergy

Social synergy arises when people work to create each other's outcomes. It isn't just people adding efforts, say to lift a heavy log together. Positive synergy is the majority adding value to others and negative synergy is reducing it, e.g. war. Trade is mutual synergy, when my acts give your benefits, e.g. a fisherman who trades fish for a farmer's grain turns excess into value. Each gives away a surplus they don't really need to meet a lack they can't fill alone. Modern prosperity arises when specialists share, and specialists produce nearly everything we use (footnote 52).

Synergy is even greater for information, as one can give information to others without losing it oneself. As connected communities grow and work at higher levels, they produce more but synergize much more. Productivity adds with size but synergy multiplies, because it depends on the number of interactions, not the number of citizens. Synergy is the key to prosperity in large connected societies (Wright, 2001) because it "expands the pie", making every slice larger. In contrast, zero-sum gains like war expand one slice at another's expense. Communities that generate synergy are "civilized".

Game theory, the formal calculation of personal gain and loss in social interactions, points out the fly in this social ointment. If my acts make your gain and yours make mine, what if I take from you and give nothing back? In fact, on the personal level, it always pays to defect, e.g. for a seller to give shoddy goods or for a buyer's check to bounce. But if the cheated "sucker" doesn't repeat the interaction, both lose their synergy gains, so cheaters destroy their own success. Synergy is destroyed by anti-social defections, or crime. Social dilemmas are common in society, e.g. social loafing, the volunteer dilemma and the tragedy of the commons (footnote 53). The predicted equilibrium is that all parties defect (Poundstone, 1992), i.e. that synergy is unstable. The mystery isn't why people don't trust but why they do.

The answer proposed here is that people evolve a community sense, when it doesn't pay to defect, e.g. a community overgrazing its commons loses a valuable resource forever (footnote 54). Social dilemmas can't be solved at the personal level, as an honest person among cheats is just a sucker. Only community level action changes the social unit and the gain-loss equation, as explained in detail elsewhere (Whitworth and Whitworth, 2010).

As people, we struggle to see social acts at the community level. A theft that is "good" for a robber is "bad" for the victim, but for a community, theft is always bad. Why spend thousands of dollars in police, court and prison costs to prosecute a hundred dollar theft? For a community, it is a good deal, as crimes that succeed create copycats. The main reason people cheat is that "everyone is doing it" (Callahan, 2004), so one defection can snowball into a social collapse, i.e. no synergy (footnote 55). Giuliani's clean-up of crime in New York (footnote 56) cost millions, but the community synergy gain was billions.

| Purpose | Examples | Synergy | Defection |
|---|---|---|---|
| Communicate | Email, Chat, ListServ, IM | Shared communication: people send more useful messages | Spam: spammers waste others' time, giving spam filters |
| Learn | Moodle, Blackboard | Shared learning: students help others learn, reducing bottlenecks | Plagiarism: student copying gives systems like Turnitin.com |
| Know | Wikipedia, Tiddlywiki | Shared knowledge: taps group knowledge, not just a few experts | Trolls: Wikipedia’s monitors fight knowledge “trolls” |
| Friend | Facebook, Myspace | Relationships: people keep in touch with friends and family | Predation: social networks report and banish predators |
| Keep current | Digg, Del.icio.us | Shared bookmarks: social bookmarks let people see trends | Advocates: who “digg” a web site they own |
| Play | Second Life, Sims | Shared play: avatars experience things impossible in reality | Bullies/thieves: newbies robbed by veterans need “safe” areas |
| Trade | E-Bay, Craig’s List, Amazon | Item trading: people from anywhere exchange more goods | Scams: scams are reduced by online reputation systems |
| Work | Monster | Work trading: people find and offer work more easily | Faking: padded CVs and fake job offers need reputation systems |
| Download | Webdonkey, Bit-Torrent | Shared down-loading: groups share processing downloads | Piracy: prosecutions by society’s copyright laws |
| Publish | Flickr, YouTube | Shared experience: people share photos and videos | Offensiveness: editors remove items that offend |
| Advice | Help boards, AnandTech | Technical advice: people who have solved problems help others | Confusers: people who ask questions before checking old ones are scolded |
| Discuss | Slashdot, Boing-Boing | Shared views: people comment and read others’ opinions easily | Caviling: karma systems deselect those who just “peck” new ideas |
| Follow | Twitter | Forms a group view by linking leaders and followers | Identity theft: a leader’s online persona can be hijacked |

Table 24.8: Socio-technical synergies and defections

Socio-technical systems not only deny defections but also enable synergies (Table 8). Forums like AnandTech illustrate this: once anyone in a group solves a problem, everyone can get the answer. The larger the group, the more likely someone can solve in seconds a problem you have struggled with for days. "Same again" functions let Amazon readers use the experiences of others, finding books bought by those who bought the book they are looking at now. Wikipedia users correct errors of fact and supply references and examples for everyone.
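A "same again" function of this kind can be sketched as a simple co-occurrence count over the community's purchase histories (the data and names below are invented): the recommendation emerges from many buyers' behavior, not from any single expert.

```python
from collections import Counter

# Toy purchase histories: each basket is the set of items one buyer bought.
baskets = [
    {"stsbook", "hcibook", "netbook"},
    {"stsbook", "hcibook"},
    {"stsbook", "gamebook"},
    {"hcibook", "netbook"},
]

def also_bought(item, baskets, n=2):
    """Items most often bought alongside `item`, most frequent first."""
    counts = Counter()
    for basket in baskets:
        if item in basket:
            counts.update(basket - {item})  # count every co-purchase
    return [other, _ := None] and None  # placeholder removed below

def also_bought(item, baskets, n=2):
    counts = Counter()
    for basket in baskets:
        if item in basket:
            counts.update(basket - {item})
    return [other for other, _ in counts.most_common(n)]

# "hcibook" appears in more stsbook baskets than anything else.
assert also_bought("stsbook", baskets)[0] == "hcibook"
```

The synergy is that each buyer's ordinary purchase quietly improves every later buyer's search, at no extra cost to anyone.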

Synergy reduces when citizens work to personal requirements like:

"Take what you can and give nothing back"

Synergy increases when citizens follow community ethics like:

"Give unto others as you would they give unto you".

Personal ethics is community pragmatics, because without the former there is no social synergy, and without synergy there is no community prosperity. If synergy gains return to the people who generate them, the society will be stable. Previously, only heroes of art, science, music, politics or other fields gave to society. Today, socio-technology lets us all be "small heroes", giving back to a community that gives to us. The miracle of socio-technology is that people will help others for no personal gain whatsoever (footnote 57).

24.3.7 Communication performance


Brian Whitworth and Adnan Ahmad. Copyright: CC-Att-SA-3 (Creative Commons Attribution-ShareAlike 3.0). See section "Exceptions" in the copyright terms below.

Figure 24.16: Linkage types (S = Sender, R = Receiver)

Communication media transmit meaning between senders and receivers. Meaning is any change in a person's thoughts, feelings or intents. Communication performance is then the total meaning exchanged by a transmission, i.e. its sum human impact.

Richness. Part of communication performance is richness, the amount of meaning a message conveys. To see video as automatically richer than text confuses meaning richness with information bandwidth. Meaning is the impact on a person, so texting "I'm safe" can have more meaning (footnote 58) than a multi-media marketing video. Hence video phones didn't immediately replace audio phones, and lean texting is still used. Media richness can thus be classified by the symbols that generate meaning, as follows:

  1. Position. A single, static symbol, e.g. to raise one's hand.

  2. Document. Many static symbols that form a pattern with meaning, as words form a sentence by syntax or as pixels form an object by gestalt principles. Documents are text or pictures.

  3. Dynamic-media (Audio). A dynamic channel with multiple semantic streams, e.g. speech has tone of voice and content (footnote 59). Music has melody, rhythm and timbre.

  4. Multi-media (Video). Many dynamic channels, e.g. video is audio and visual channels. Face-to-face communication uses many sensory channels.

One expects richer media to have the potential to transfer more meaning.

Linkage. The meaning exchanged also depends on the number of senders and receivers, i.e. on linkage (Figure 16), which can be:

  1. Interpersonal (one-to-one, two-way): Both parties can send and receive, usually signed.

  2. Broadcast (one-to-many, one-way): From one sender to many receivers, can be unsigned.

  3. Matrix (many-to-many, two-way): Many senders to many receivers, usually unsigned.

As people have interpersonal communication, so communities communicate group-to-group by matrix communication. This, the most powerful linkage, is when many send and many receive in one transmit operation. It combines one-to-many (broadcast) and many-to-one (merging) communication (Figure 16). Addressing an audience is one-to-many communication, applauding a speaker is many-to-one, and an audience applauding to itself is matrix communication. In the latter case, the group producing the clapping message also receives it. Matrix communication allows normative influence, so audiences can start and stop clapping together. A choir singing is matrix communication, so when choirs go off key, they usually do so together.

Face-to-face groups use matrix communication, as body language and facial expressions convey everyone's position on an issue. A valence index, calculated from member position indicators, can predict a group discussion outcome as well as the words (Hoffman and Maier, 1961). So online electronic groups can form social agreement using only anonymous, lean, many-to-many signals, with no rich information exchange or discussion (Whitworth et al, 2001). Community voting, as in an election, is a physically slow matrix communication that computers can speed up. Tag cloud, reputation system and social book-mark technologies all illustrate online support for matrix communication.
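The valence-index idea above can be sketched in a few lines. This toy version (the scoring scheme is invented; Hoffman and Maier's actual index counted solution-directed comments) sums anonymous for/against signals and predicts the group outcome from the sign, with no rich discussion content needed:

```python
def valence_index(positions):
    """positions: +1 (for), -1 (against), 0 (neutral), one per member signal."""
    return sum(positions)

def predicted_outcome(positions, threshold=0):
    # Lean, anonymous, many-to-many signals are enough to read the group.
    v = valence_index(positions)
    if v > threshold:
        return "adopt"
    if v < -threshold:
        return "reject"
    return "undecided"

assert predicted_outcome([+1, +1, 0, -1, +1]) == "adopt"
assert predicted_outcome([+1, -1]) == "undecided"
```

Tag clouds, karma scores and online votes are the same computation at community scale: many lean signals merged into one group-level result.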

If communication performance depends on both richness and linkage, then a tyranny bombarding citizens 24/7 with TV video propaganda is low linkage (one-to-many), while people talking freely via text blogs is high linkage (many-to-many), i.e. the latter may communicate more.

24.3.8 Communication media

Table 9 shows various communication media by richness and linkage, with electronic forms in italics, e.g. a phone call is an interpersonal audio but a letter is interpersonal text. A book is a broadcast document, but radio is broadcast audio and TV is broadcast video. The Internet can broadcast documents (web sites), audio (podcasts) or videos (YouTube). Email allows two-way interpersonal text messages, while Skype adds two-way audio and video. Chat is few-to-few matrix text communication, as is instant messaging but with known people. Blogs are text broadcasts that also allow comment feedback. Online voting is matrix communication, as many communicate with many in one operation.

Computers allow "anytime (footnote 60), anywhere" communication for less effort, e.g. an email is easier to send than posting a letter. Lowering the message threshold means that more messages are sent (Reid et al, 1996). Email stores a message until the receiver can view it (footnote 61), but a face-to-face message is ephemeral, it disappears if you aren't there to get it. Yet being unable to edit the message sent makes sender state streams like tone of voice more genuine.

Position
  Broadcast: Footprint, Flare, Scoreboard, Scream
  Interpersonal: Posture, Gesture, Acknowledgement, Salute, Smiley
  Matrix: Show of hands, Applause, An election, Web counter, Karma system, Tag cloud, Online vote, Reputation systems, Social bookmarks

Document
  Broadcast: Poster, Book, Web site, Blog, Online photo, News feed, Online review, Instagram, Twitter (1)
  Interpersonal: Letter, Note, Email, Texting, Instant message, Social network (2)
  Matrix: Chat, Twitter (1), Wiki, E-market, Bulletin board, Comment system, Advice board, Social network (2)

Dynamic-media (Audio)
  Broadcast: Radio, Loud-speaker, Record or CD, Podcast, Online music
  Interpersonal: Telephone, Answer-phone, Cell phone, Skype audio
  Matrix: Choir, Radio talk-back, Conference call, Skype conference call

Multi-media (Video)
  Broadcast: Speech, Show, Television, Movie, DVD, YouTube video
  Interpersonal: Face-to-face conversation, Chatroulette, Video-phone, Skype video
  Matrix: Face-to-face meeting, Cocktail party, Video-conference, MMORPG, Simulated world

(1) Combines broadcast (text) and matrix (follow). (2) Combines interpersonal and matrix.

Table 24.9: Communication media by richness (rows) and linkage (columns)

Electronic communication was expected to just become richer, with video the anointed heir, but EBay’s reputations, Amazon’s book ratings, Slashdot’s karma, tag clouds, social bookmarks and Twitter aren't rich at all. Table 9 shows that computer communication evolved by linkage as well as richness. Computer chat, blogs, messaging, tags, karma, reputations and wikis are all high linkage but low richness.

Communication that combines richness and linkage is expensive at the interface. A face-to-face meeting, for example, has both rich channels and matrix communication: it conveys sender state and resolves real-time contentions, like people talking at once, by showing where others are looking. Doing this online requires many video streams on one screen, but who then controls the interface? Does each person control their own and ignore the rest, or does one person set a common interface? In audio-based tagging, speaking automatically makes a person's video central (Figure 17), so the interface is common but group-directed, i.e. democratic. Gaze-based tagging works the same way, except that a person's window expands when others look at it, just as a much-used link in a tag cloud grows bigger. It is in effect a group-directed bifocal display (Spence and Apperley, 2012). Only when matrix communication is combined with media richness will online meetings start to match face-to-face ones.

Brian Whitworth and Adnan Ahmad. Copyright: CC-Att-SA-3 (Creative Commons Attribution-ShareAlike 3.0). See section "Exceptions" in the copyright terms below.


Figure 24.17 A-B: Audio based video tagging

As video-phones are now easily available, why isn't video-phoning the norm? Perhaps it has disadvantages, like having to dress up for a call or check the background before calling Mum. Some may prefer text to video precisely because it is less rich, if they don't want to communicate. Computer communication isn't just about richness, because communication isn't just about the message: there is the sender and receiver too.

24.3.9 Semantic streams

Communication goals can be classified by level as follows (Whitworth et al, 2000):

  1. Informational. The goal is to analyze information about the world and decide a best choice. This logical process is surprisingly fragile (Whitworth et al, 2000).

  2. Personal. The goal is to form relationships, which are more reliable. Relating involves a turn-taking, mutual-approach process to manage the emotional arousal evoked by the presence of others (Short et al, 1976). (footnote 62)

  3. Community. The goal is to stay "within" the group, as belonging to a community means being part of it, and so protected by it. Communities outlast friends.

Analyze (task information)
  Influence: Informational influence, of the facts
  Linkage: Broadcast
  Questions: What is right? What is best?

Relate (to other people)
  Influence: Personal influence, of other people
  Linkage: Interpersonal
  Questions: Who do I like? Who do I trust?

Belong (to a community)
  Influence: Normative influence, of the community
  Linkage: Matrix
  Questions: What is everyone doing? Am I "in" the group?

Table 24.10: Human goals by influence and linkage

Table 10 shows how each goal maps to influence and linkage. Whether online or off, we analyze information, relate to others and belong to communities, so are subject to informational, personal and normative influence. The latter is based neither on logic nor friendship, e.g. patriotism is "my country, right or wrong", friendships or not. An individual may be influenced by task information, friend recommendations or community norms via different semantic streams. Semantic streams arise when people process a physical signal in different ways to generate different meanings, so one physical message can at the same time convey:

  1. Message content. Symbolic statements about the literal world, e.g. a sentence.

  2. Sender state. Sender psychological state, e.g. an agitated tone of voice.

  3. Group position. Sender intent over many is a group intent, e.g. an election.

Human communication is subtle because one message can have multiple meanings and people respond to many semantic streams at once, e.g. a person leaving a party may say "I had a good time", but by tone imply the opposite. One can say "I AM NOT ANGRY!" in an angry voice (footnote 63). What is less obvious is that a message can also indicate a position, or intent to act, e.g. saying "I had a good time" in a certain tone or with certain body language can indicate an intention to leave a party. When a community acts, its citizens follow.

In the general model (Figure 18), physical level signals generate many semantic streams and influences. While face-to-face interactions allow multi-stream communication, computing tends to pick one type, e.g. email text gives content but not sender state. Online voting gives position but not comments. Technologies that operate at the community level use matrix or group-to-group communication, such as:

  1. The reputation ratings of Amazon and E-Bay are community-based product quality control and Slashdot does the same for content, letting readers rate comments so viewers can filter out low quality ones.

  2. Social bookmarks, like Digg and Stumbleupon, let users share link favorites, to see what the community is looking at.

  3. Tags are a technology that increases the font size of links according to their frequency of use. As people walking in forests follow the paths trod by others, so we can follow the "web-tracks" of others on a browser screen.

  4. Twitter's follow function lets people see the leaders they like, and lets leaders broadcast ideas to followers.

The power of the computer is to allow matrix communication by millions and billions. What might a global referendum on current issues reveal? The Internet could tell us.

As for the future, in an Internet dominated by personal "apps", multi-user apps are an obvious next step, as are applications supporting many semantic streams, like Facebook friend voting. Given recent advances in connectivity, we can expect a "bite-back" in privacy demands, i.e. more small groups or "tight" communities that are harder to get into.

Cognitive processes in communication

Brian Whitworth and Adnan Ahmad. Copyright: CC-Att-SA-3 (Creative Commons Attribution-ShareAlike 3.0). See section "Exceptions" in the copyright terms below.

Figure 24.18: Cognitive processes in communication

The World Wide Web is a system evolving. Its first level, an information library accessed by search tools, is well in place. The second, a medium for personal relations, is also well underway. The third, a civilized social environment, is the current and future challenge. Even a cursory study of Robert's Rules of Order will dispel any illusion that social dealings are simple (Robert, 1993). Socio-technology allows hundreds of millions of people to act together, but we are still figuring out what "Here Comes Everybody" (footnote 64) means (Shirky, 2008). None of us is an island, as we link to mothers, fathers, brothers, sons, daughters, aunts, sisters, wives, grandmothers, uncles, grandfathers, husbands and friends, so when social others talk, even hardened dictators listen. This is a good thing.

24.3.10 Discussion questions

Research selected questions from the list below. If you are reading this chapter as part of a class - either at university or a commercial course - you can research these questions in pairs and report back to the class, with reasons and examples.

  1. Why can't technologists leave the social and ethical questions to non-technologists? Give examples of IT both helping and hurting humanity. What will decide, in the end, whether IT helps or hurts us overall?

  2. Compare central vs. distributed networks (Ethernet vs. Polling). Compare the advantages and disadvantages of centralizing vs. distributing control. Is central control ever better? Now consider social systems. Of the traditional socio-technical principles listed, which ones distribute work-place control? Compare the advantages and disadvantages of centralizing vs. distributing control in a social system. Compare governance by a dictator tyrant, a benevolent dictator and a democracy. Which type are most online communities? How might that change?

  3. Originally, socio-technical ideas applied social requirements to work-place management. How has it evolved today? Why is it important to apply social requirements to IT design? Give examples.

  4. Illustrate system designs that apply: Mechanical requirements to hardware (footnote 65). Informational requirements to hardware (footnote 66). Informational requirements to software. Personal requirements to hardware (footnote 67). Personal requirements to software. Personal requirements to people. Community requirements to hardware. Community requirements to software. Community requirements to people. Community requirements to communities. Give an example in each case. Why not design software to mechanical requirements?

  5. Is technology the sole basis of modern prosperity? If people suddenly stopped trusting each other, would wealth continue? Use the 2009 credit meltdown to illustrate your answer. Can technology solve social problems like mistrust? How can social problems be solved? Can technology help?

  6. Should an online system gather all the data it can during registration? Give two good reasons not to gather or store non-essential personal data. Evaluate three online registration examples.

  7. Spam demonstrates a socio-technical gap, between what people want and what technology does. How do users respond to it? In the “spam wars”, who wins? Who loses? Give three other examples of a socio-technical gap. Of the twenty most popular third-party software downloads, which relate to a socio-technical gap?

  8. What is a legitimate government? What is a legitimate interaction? How do people react to an illegitimate government or interaction? How are legitimacy requirements met in physical society? Why won’t this work online? What will work?

  9. What is the problem with “social engineering”? How about “mental engineering” (brainwashing)? Why do these terms have negative connotations? Is education brainwashing? Why not? Explain the implications of all this for STS design.

  10. For a well known STS, explain how it supports, or not, the eight proposed aspects of community performance, with screenshot examples. If it doesn't support an aspect, suggest why. How could it?

  11. Can one own something but still let others use it? Can a community be both free and ordered? Can people compete and cooperate at the same time? Give a physical and online examples. How are such tensions resolved? How does democracy reconcile freedom and order? Give examples in politics, business and online.

  12. What is community openness for a nation? For an organization? For a club or group? Online? Why are organizations that promote based on merit more open? Illustrate technology support for merit-based promotion in an online community.

  13. Is a person sending money to a personal friend online entitled to keep it private? What if the sender is a public servant? What if it is public money? Is a person receiving money from a personal friend online entitled to keep it private? What if they are a public servant?

  14. What is social synergy? What destroys it? How do communities encourage synergy? How do they prevent its destruction? How do trust and synergy relate? Give physical and electronic examples.

  15. Give five examples of defections in ordinary life. What happens if everyone defects? Give five online examples, and for two specify how technology lowers defections.

  16. Would you prefer to be a middle class citizen now or a lord three hundred years ago? Consider factors like diet, health, clothes, leisure, travel, etc. Where did the lord's wealth mainly come from? Where does the power of your salary to buy many things come from today? What is the principle and how does it apply online?

  17. What is a social dilemma? Give three physical examples from your experience. Why can't individuals solve them? How are they solved? Give three online social dilemmas. How are they to be solved? Relate this to socio-technical design.

  18. What happens if no-one in a group suggests anything? What happens if you suggest things in a group? How can groups manage this? Answer the same questions for volunteering. Give examples from your experience. What percentage of online users are "lurkers", who look but don't post? Review a popular board you haven't used before. What stops you contributing? Add something anyway. How could the board increase participation?

  19. Is ethics idealism or pragmatism? Explain the statement: Personal ethics is community pragmatics. Consider a thief who steals a wallet and isn't caught. List the thief's gains and the victim's losses. What is the net community result? What happens if everyone in a community steals, i.e. takes but does not give? Generalize to online cases. How then does STS design relate to ethics?

  20. Why is synergy more important for larger communities? Why is it especially important for socio-technical systems? How can technology help increase synergy? Report the current estimated sizes of popular socio-technical systems. Clarify what is exchanged, who interacts and the synergy.

  21. What is communication? What is meaning? What is communication performance? How can media richness be classified? Is a message itself rich? Does video always convey more meaning than text? Can rich media deliver more communication performance? Give online and offline examples.

  22. What affects communication performance besides richness? How is it classified? Is it a message property? How does it communicate more? Give online/offline examples.

  23. If media richness and linkage both increase communication power, why not have both? Describe a physical world situation that does this? What is the main restriction? Can online media do this? What is, currently, the main contribution of computing to communication power? Give examples.

  24. What communication media type best suits these goals: telling everyone about your new product; relating to friends; getting group agreement? Give online and offline examples. For each goal, what media richness, linkage and anonymity do you recommend? You lead an agile programming team spread across the world: what communication technology would you use?

  25. State differences between the following media pairs: email and chat; instant messaging and texting; telephone and email; chat and face-to-face conversation; podcast and video; DVD and TV movie; wiki and bulletin board. Do another pair of your choice.

  26. How can a physical message convey content, state and position semantic streams? Give examples of communications that convey: content and state; content and position; state and position; and content, state and position. Give examples of people trying to add an ignored semantic stream to technical communication, e.g. people introducing sender state data into lean text media like email.

  27. Can a physical message generate many information streams? Can an information stream generate many semantic streams? Give examples. Does the same apply online? Use how astronomical or earthquake data is shared online to illustrate your answer.

  28. You want to buy a new cell-phone and an expert web review suggests model A based on factors like cost and performance. Your friend recommends B, uses it every day, and finds it great. On an online customer feedback site, some people report problems with A and B, but most users of C like it. What are the pluses and minuses of each influence? Which advice would you probably follow? Ask three friends what they would do.

  29. What is the best linkage to send a message to many others online? What is the best linkage to make or keep friends online? What is the best linkage to keep up with community trends online? List the advantages and disadvantages of each style. How can technology support each of the above?

  30. Explain why reputation ratings, social bookmarks and tagging are all matrix communication. In each case, describe the senders, the message, and the receivers. What is the social goal of matrix communication? How exactly does technology support it?

  31. Give three online leaders searched by Google or followed on Twitter. Why do people follow leaders? How can leaders get people to follow them? How does technology help? If the people are already following a set of leaders, how can new leaders arise? If people are currently following a set of ideas, how can new ideas arise? Describe the innovation adoption model. Explain how it applies to "viral" videos.

24.4 Part 4: An example: Online rights

A right is a community permission to act

Legitimacy analysis specifies community requirements for technology design. Previous examples are polite computing (Whitworth and Liu, 2008) and channel email (Whitworth and Liu, 2009). This section proposes an access control model based on these social requirements:

  • Ownership. To reduce object conflicts.

  • Freedom. To own oneself, to not be a slave.

  • Fairness. That social consequences reflect action contributions (Rawls, 2001). (footnote 68)

  • Privacy. To control the release of personal information to others.

  • Transparency. A democratic citizen's right to know how they are governed.

If the new user of computing is society, we must specify its requirements.

24.4.1 Access control

In computing, decision support systems recommend decisions, access control systems permit them and control systems carry them out. Access control began with multi-user computing as users sharing the same system came into conflict (Karp et al., 2009). Traditional access control systems (ACSs) use a subject by object access permission matrix to allocate rights (Lampson, 1969). As computing evolved, ACS logic offered local access control for distributed systems and roles for many person systems. With these variants, the matrix approach has worked for military (Department of Defense, 1985), commercial (Clark and Wilson, 1987), organizational (Ferraiolo and Kuhn, 2004), distributed (Freudenthal et al, 2002), peer-to-peer (Cohen, 2003) and grid environment (Thompson et al, 1999) applications.
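The subject-by-object permission matrix these systems share can be sketched in a few lines. This is a minimal illustration with hypothetical names (`AccessMatrix`, `grant`, `allowed`), not code from any of the cited systems:

```python
# A sparse access matrix: maps (subject, object) pairs to the set of
# operations that subject may perform on that object.
class AccessMatrix:
    def __init__(self):
        self._matrix = {}  # (subject, obj) -> set of operations

    def grant(self, subject, obj, operation):
        self._matrix.setdefault((subject, obj), set()).add(operation)

    def revoke(self, subject, obj, operation):
        self._matrix.get((subject, obj), set()).discard(operation)

    def allowed(self, subject, obj, operation):
        return operation in self._matrix.get((subject, obj), set())

acs = AccessMatrix()
acs.grant("alice", "report.txt", "read")
acs.grant("alice", "report.txt", "write")
acs.grant("bob", "report.txt", "read")

print(acs.allowed("alice", "report.txt", "write"))  # True
print(acs.allowed("bob", "report.txt", "write"))    # False
```

The sparse dictionary stands in for the full matrix: as the text notes, for millions of users and billions of resources the full matrix is astronomically large, so real systems store only the granted cells.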

Today, access control in social networks (SNs) is more about access than control. The permission matrix for friend interactions increases geometrically, not linearly, with group size, so for hundreds of millions of people the possible connections are astronomical. Each account also adds hundreds or thousands of photos or comments a year. Finally, each person wants the sort of domain control previously reserved only for system administrators. Social networkers want local access control, not just to read, write and execute files (Ahmad and Whitworth, 2011), but to control their own social structure, without asking a central authority for permission (Sanders and McCormick, 1993), e.g. to restrict a photo to family or friends. Social networks vastly increase ACS complexity, as millions of users want all rights to billions of resources, plus rights to re-allocate rights. They are the perfect storm for the traditional ship of access control.

The current rules of social network interaction are based on designer intuitions rather than formal models, so they vary between systems and over time, with public outrage the only check. There is no agreed scheme for allocating permissions to create, edit, delete or view object entities, let alone manage roles. The aim here is to fill that gap, to develop a socio-technical access control model that is legitimate, efficient, consistent and understandable.

24.4.2 Rights

Communities, by norms, laws or culture, grant citizens rights, or social permissions to act. Rights reduce physical conflict, as parties who agree on rights don't have to fight; this moves conflict from the physical level to the informational or legal level (footnote 69). Physical society expresses rights in terms of ownership (Freeden, 1991), so specifying who owns what online can specify rights in a way that designers can support and users can understand (Rose, 2000). This doesn't mechanize online interaction, as rights are choices not obligations, e.g. the right to sue doesn't force one to sue. Legitimate access control defines what online actors can do, not what they must do.

Traditional design refers to software "users", as if they were on a drug, but Facebook's users aren't part of the software. Socio-technology talks of actors who switch software, not passive users. Just as a shop can see "a sale" or "a customer", IT designers can see a user or an actor.

An actor is a system able to act independently of outer conditions, i.e. to act not react. Actors can initiate acts, which implies some internal choice or autonomy (footnote 70). A program that always responds the same way to the same input has no autonomy, so can't itself be an actor (footnote 71).

A person is an actor with an ego-self and a citizen is a person who can be held to account (footnote 72). To hold to account, to link consequences to people, is fundamental to all social interaction. (footnote 73) By accountability, communities reward those who benefit them and punish those who harm them. (footnote 74) While philosophers argue over free will, all communities consider citizens accountable and govern accordingly. Those deemed not so, the criminal or insane, are in the care of those who are. A community holds citizens (footnote 75) to account for the effects of their acts on others, not just on themselves. Accountability is the over-arching social requirement, without which communities fail. It only applies to people, e.g. in car accidents the driver is held to account, not the car, as the car has no personal self to be accountable. (footnote 76)

Rights arise when social requirements manifest as personal cognitions, which manifest as informational rules, which manifest as action directives. In physical communities, police and courts direct citizens to follow laws, written by judges who understand justice. Online the same applies, but here code is the law, police, judge, jury and prison guard. To avoid corruption, such systems must be legitimate by design.

The following derives informational rights from community requirements stated on the personal level. In information terms, a right is an actor (A) applying an operation (O) to an entity (E):

Right = (Actor, Entity, Operation) = (A, E, O)

Rights can be stored as (Actor, Entity, Operation) triplets, where an actor is an accountable entity or their agent, an entity is any object, actor (footnote 77) or right, and an operation is any one available to the entity. A right transmitted or stored is often called a permission.
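A minimal sketch of such a triplet store, with hypothetical names (`allocate`, `permitted`). Note how a meta-right needs no special machinery: the entity of a right can itself be a right:

```python
from collections import namedtuple

# A right is an (Actor, Entity, Operation) triplet, per Right = (A, E, O).
Right = namedtuple("Right", ["actor", "entity", "operation"])

rights = set()

def allocate(actor, entity, operation):
    """Grant a right by storing its triplet (a stored right is a permission)."""
    rights.add(Right(actor, entity, operation))

def permitted(actor, entity, operation):
    """An act is permitted only if its triplet has been allocated."""
    return Right(actor, entity, operation) in rights

# A simple right: alice may edit photo42.
allocate("alice", "photo42", "edit")

# A meta-right: alice may delegate her edit right, so the entity
# of this new right is itself a right.
edit_right = Right("alice", "photo42", "edit")
allocate("alice", edit_right, "delegate")

print(permitted("alice", "photo42", "edit"))       # True
print(permitted("bob", "photo42", "edit"))         # False
print(permitted("alice", edit_right, "delegate"))  # True
```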

24.4.3 Specification

Socio-technical systems can be modeled as entities and operations:

  1. Entities. Stored as static information, with properties.

    1. Actor. An entity that can participate in a social interaction. (footnote 78)

      1. Persona. Represents an accountable offline person or group.

      2. Group. A set of personae acting as one. (footnote 79)

      3. Agent. An actor that represents another actor.

    2. Object. Conveys information and meaning.

      1. Item. A simple object with no dependents, e.g. a bulletin board post.

      2. Space. A complex object with dependents, e.g. a bulletin board thread.

    3. Right. A system permission for an actor to operate on an entity.

      1. Simple rights. Rights to act on object or actor entities.

      2. Meta-rights. Rights to act on right entities, e.g. delegate.

      3. Role. A variable right (a set of rights).

  2. Operations. Stored as a program or method that processes entities.

    1. Null operations don't change the target entity, e.g. view (footnote 80), enter.

    2. Use operations change the target in some way, e.g. edit, create.

    3. Communication operations transfer data from sender(s) to receiver(s), e.g. send.

    4. Social operations change a right or role, e.g. delegate.

Link operations are discussed elsewhere (Whitworth and Bieber, 2002).
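The taxonomy above maps naturally onto a class hierarchy in which operation sets accumulate down the entity tree, as the "..." entries in Table 12 indicate. A Python sketch with illustrative names, not the chapter's implementation:

```python
# Entity taxonomy as a class hierarchy.
class Entity: pass

class Actor(Entity): pass       # 1. can take part in social interaction
class Persona(Actor): pass      # 1a. accountable offline person or group
class Group(Actor): pass        # 1b. set of personae acting as one
class Agent(Actor): pass        # 1c. acts for another actor

class Object(Entity): pass      # 2. conveys information and meaning
class Item(Object): pass        # 2a. simple object, no dependents
class Space(Object): pass       # 2b. complex object with dependents

class Right(Entity): pass       # 3. permission to operate on an entity
class Role(Right): pass         # 3c. a variable set of rights

# Operations attach at each level and inherit downwards.
OPS = {
    Entity: {"view"},
    Actor: {"delete", "edit"},
    Persona: {"logon"},
    Agent: {"delegate"},
    Group: {"join"},
    Object: {"delete", "edit", "move"},
    Item: {"convert_to_space"},
    Space: {"create", "enter"},
    Right: {"allocate", "reallocate"},
    Role: {"friend", "ban"},
}

def operations(cls):
    """All operations available to an entity type, including inherited ones."""
    return set().union(*(OPS.get(c, set()) for c in cls.__mro__))

print(sorted(operations(Persona)))  # ['delete', 'edit', 'logon', 'view']
```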

24.4.4 The system itself

The information system itself is the first entity, owned by the system administrator (SA), who is the first user. A tyrant SA might alter posts or votes by whim, but a benevolent dictator, Plato's best form of rule, gives citizens rights. As even benevolent dictators die, humanity invented democracy, to reduce dynasty transfer battles. (footnote 81) Yet no online system we know of votes for its system administrator, e.g. even Wikipedia isn't a democracy.

An ACS controls at the informational level. If it is not to be in charge, it must allocate all use rights to people who are accountable, giving the ACS operational principle:

P1. All non-null entity rights should be allocated to actors at all times.

So every entity should be owned, ultimately, by a person. If this is not true, an access control system must at some point respond to an access request from itself. Yet as an information system, it has no self to act socially. Hence rights aren't added or deleted, but allocated and re-allocated.

24.4.5 Persona

An online persona represents an offline party, e.g. an avatar, profile, mail account, wall or channel can represent an offline person, group or organization. An online persona is activated by a logon operation, which equates it to the offline party. An online computer agent can act for a group, like installation software for a company, but social acts must ultimately trace back to people and online is no different. (footnote 82) If an installation misleads, we sue company directors not software. (footnote 83)

Who owns a persona? Open systems let people self-register, to create their personae. If freedom applies online, one should own one's online self, but some systems don't permit this. Can you delete a Wikipedia or Wordpress profile? (footnote 84) The freedom requirement gives the ACS principle:

P2. A persona should be owned by itself.

Some complexities are that a persona can be:

  1. Abandoned. HotMail accounts inactive for over 90 days are permanently deleted, i.e. if not used they "starve and die."

  2. Transferred. One can permanently pass a persona to another, along with its reputation. (footnote 85)

  3. Delegated. One can ask an agent to act on one's behalf, e.g. a proxy vote.

  4. Orphaned. If the person behind a persona dies, their will is physically respected, but online programs act as if death doesn't exist, e.g. one can get an eerie Facebook message from a person the day after going to his funeral. As Facebook will in a few decades represent millions of obituaries, we need online wills.

Table 11 below summarizes persona access rights.

Persona operations: View, Delete, Edit, Ban, Create
Rights holders: the SA and the persona's owner (1)

Table 24.11: Persona access rights. 1 Delegated by the SA.

24.4.6 Object entities

Object entities convey meaning by evoking cognitive processing, e.g. a family photo.

Items. A simple object with no dependents, e.g. a board post. It can be deleted, edited or viewed. In the object hierarchy tree, items are like leaves. An item can be a:

  1. Comment: Items whose meaning depends on another, e.g. "I agree" makes no sense alone.

  2. Message: Items with sender(s) and receiver(s), e.g. an email.

  3. Vote: Items that convey a position, a choice from a response set.

Spaces. As leaves need branches, so items need spaces, e.g. an online wall that accepts photos is an information space - a complex object with dependents. It can be deleted, edited or viewed like an item, but can also contain objects, e.g. a bulletin board. Spaces within spaces give object hierarchies, with the system itself the first space.

A space is a parent to the child entities it contains, which depend on it to exist. So deleting a space deletes its contents, e.g. deleting a board deletes its posts. The move operation changes the parent space of an object, and the enter operation displays the objects a space contains. As every entity is in the system space:

P3: Every entity has a parent space, up to the system space.

If every entity has a parent space (footnote 86), its ancestors are the set of all spaces that contain it, up to the system itself, the first ancestor. The offspring of a space are any child objects it contains, their children, etc. So all entities have owners and ancestors, and any space can have offspring.
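The parent-space rules above (P3, ancestors up to the system space, cascade delete) can be sketched as a simple tree. Names like `Node` and `ancestors` are illustrative, not the chapter's implementation:

```python
# Object hierarchy: every entity has a parent space, up to the system space.
class Node:
    def __init__(self, name, parent=None):
        self.name = name
        self.parent = parent
        self.children = []
        self.deleted = False
        if parent:
            parent.children.append(self)

    def ancestors(self):
        """All containing spaces, ending with the system space, the first ancestor."""
        node, out = self.parent, []
        while node:
            out.append(node)
            node = node.parent
        return out

    def delete(self):
        """Deleting a space deletes its offspring (cascade), per the text."""
        self.deleted = True
        for child in self.children:
            child.delete()

system = Node("system")               # the system itself is the first space
board = Node("board", parent=system)  # a space within the system
post = Node("post", parent=board)     # an item within that space

print([a.name for a in post.ancestors()])  # ['board', 'system']
board.delete()
print(post.deleted)  # True: deleting the board deleted its posts
```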

24.4.7 Operations

Entity Type: Operations ("..." = operations inherited from the parent type)

Any entity: View
1. Social entity: ..., Delete, Edit
   a. Persona: ..., Logon
   b. Agent: ..., Delegate
   c. Group: ..., Join
2. Object entity: ..., Delete, Edit, Move
   a. Item: ..., ConvertToSpace
   b. Space: ..., Create, Enter
3. Right entity: ..., Allocate, Re-allocate
   a. Role: ..., Friend, Ban

Table 24.12: Operation sets by entity type

Operations are actor initiated methods on information entities subject to access control.

Operation sets. Operations can be clustered for access control purposes, e.g. delete flags an entity for destruction, undelete reverses that, and destroy kills it permanently. An ACS that can manage one can manage all. Likewise, edit alters entity values, append extends them, version edits with backup, and Wikipedia's revert is the inverse. Again, variants of a set present the same ACS issues, so to resolve one is to resolve all.

Create. While edit changes existing entity values, create adds a new entity, e.g. creating a Wikipedia stub for others to edit. Duplicate is a variant of create. Table 12 shows the operation sets for various entity types, where create is an act on a space - see Section 13.10.

View. Operations like view are null acts at the informational level, as they don't change their target, but viewing another person is a personal level act. In social facilitation, knowing one is being looked at energizes the viewed party (Geen and Gange, 1983). Viewing someone affects them because success in a social group depends very much on how others see you. Privacy, the right to control information about ourselves, is important for the same reason. The act of viewing can have great effect at the community level, e.g. a "viral" online video makes others want to view it too.

The right to use an entity implies accountability, but as one can't use what one can't see, use rights imply view rights, giving the ACS operational principle:

P4: Any right to use an object implies a right to view it.

Communication. In a simple communicative act, a sender creates a message that a receiver views. It is by definition a joint act in which both parties have choice, so communication should be by mutual consent. Privacy is the right to remain silent, to not communicate and to not receive messages. In the physical world, people say "Can I talk to you?" because communication is by permission. Some online systems, however, like email, don't recognize this. They give anyone the right to send a message to anyone, whether the receiver wishes it or not, and so invite spam. In contrast, in Facebook, chat, Skype and Twitter, one needs prior permission to message someone. The details of legitimate communication, where a channel is opened by mutual consent before messages are sent, are given in (Whitworth and Liu, 2009). The resulting ACS operational principle is:

P5: Any communication act should have prior mutual consent.
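P5 can be sketched in code as a channel that must be accepted by the receiver before any message flows. This is a minimal illustration, not any real messaging API; the Channel class and all names are assumptions for the example.

```python
# Sketch of P5: a channel is opened by mutual consent before any message flows.
# The Channel class is illustrative, not drawn from any real system.

class Channel:
    def __init__(self, sender, receiver):
        self.sender, self.receiver = sender, receiver
        self.accepted = False   # no consent yet
        self.messages = []

    def accept(self):
        """The receiver consents to the channel, as in Facebook chat or Skype."""
        self.accepted = True

    def send(self, text):
        # Email-style systems skip this check, and so invite spam.
        if not self.accepted:
            raise PermissionError("receiver has not consented to this channel")
        self.messages.append(text)

chat = Channel("alice", "bob")
try:
    chat.send("Buy now!")       # rejected: no prior mutual consent
except PermissionError:
    pass
chat.accept()                   # bob accepts the channel
chat.send("Hi Bob")             # now messages may flow
```

The key design point is that opening the channel and sending a message are separate acts, and only the first requires the receiver's decision.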

The evolution of telephony illustrates this. At first, phones just transmitted information — the phone rang and one answered, not knowing who was calling. This allowed telemarketing, the forerunner of spam. Now cell phones show caller ID by default, so one can choose whether to respond, i.e. it is more mutual. Yet we still have to personally type in contact list names, while social networks synergize - we each type in our own name, then let others add it to their contact list. Cell phone companies could use this synergy but, like the makers of TV remotes, are locked into a one-level mind-set (footnote 87).

24.4.8 Roles

Roles, like parent, friend or boss, simplify rights management by covering many cases, yet remain understandable, so people can review, evaluate and accept them. They are equally useful online, e.g. Wikipedia citizens can aspire to steward, bureaucrat or sysop roles by good acts. Slashdot's automated rating system offers readers the moderator role for a time (Benkler, 2002), if they are registered (not anonymous) regular users with positive "karma" (how others rate their comments). Every registered reader has five influence points to spend on others as desired over a three day period (or they expire). In this role democracy, high-rated commenters get more karma points and so more say on who is seen. The technology lets a community democratically direct its governance.

In information terms, a role is a variable rights statement, e.g. a friend role is a set of people with extra permissions. Roles are generic rights, giving the ACS operational principle:

P6: A role is a right expressed in general terms, as a pointer or set.

Roles are the variables of social logic:

Role = (Actor, Entity, Operation)

The bolding indicates a variable, e.g. the owner role can be generally defined as any party who has all rights to an entity:

RoleOwner = (Owner, Entityi, OperationAll)

Making a person the owner just allocates the Owner pointer to their persona. Roles are flexible, e.g. the friend role lets one change who can see photos posted on a wall:

RoleFriend = (Friend, EntityWall, OperationView)

where Friend is a persona set. To "friend" another is to add them to this role set; to unfriend is to remove them. As a variable can be undefined, so a role can be empty, i.e. a null friend set. To "friend" is spoken of as an act on a person, but it doesn't change the persona entity, so it is really an act upon a local role. You decide your friends, so don't need permission to friend anyone. Equally, banning a person adds them to the denied-entry role for your space. If banning were an act on another's persona, it would need their consent. That it is an act on my role gives the ACS principle:

P7. A space owner can ban or give entry to a persona without its owner's permission.
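The idea of a role as a rights statement with a variable actor set can be sketched as follows. All names here (Role, friend_role, etc.) are illustrative assumptions, not part of any real ACS implementation.

```python
# Sketch of P6/P7: a role is an AEO (Actor, Entity, Operation) triad whose
# actor part is a variable set. Friending or banning acts on my role, not on
# the other person's persona, so it needs no consent from them.

class Role:
    def __init__(self, entity, operations):
        self.actors = set()                 # the variable part: current holders
        self.entity = entity                # what the role applies to
        self.operations = set(operations)   # what the role allows

    def allows(self, actor, operation):
        return actor in self.actors and operation in self.operations

# A wall owner defines a local friend role: friends may view the wall.
friend_role = Role("alice_wall", {"view"})

friend_role.actors.add("bob")        # "friending" adds Bob to my role set
assert friend_role.allows("bob", "view")

friend_role.actors.discard("bob")    # unfriending removes him; role may be empty
assert not friend_role.allows("bob", "view")
```

A ban role works the same way: the space owner adds a persona to a denied-entry set they own, which is why no permission from the banned party is needed.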

Re-allocating actors isn't the only way to alter a role. By definition, one can change a role's:

  1. Actor. The role actor set.

  2. Entity. The entities it applies to.

  3. Operation. The operations it allows.

For example, a friend role could limit the objects it applies to, with some photos for family only. It could also allow adding comments to photos or not. Few current systems fully use the power of local roles, e.g. social networks could let actors define an acquaintance role, with fewer rights than a friend but more than the public, or an extended family role.

24.4.9 Meta-rights

Owning an object is the right to use it:

RightUser = R(User, Entityi, OperationUse),

but a right as an entity can also be acted on, i.e. re-allocated. A meta-right is the right to re-allocate a right. In formal terms:

RightMetaRight = R(Owner, RightOwn, OperationAllocate),

where the entity acted on is a right. An owner with all rights to an entity also has its meta-rights, i.e. the right to change its rights. Paradoxically, fully owning an entity implies the right to give it away entirely. Reachability (footnote 88) requires meta-rights to be absolute, i.e. there are no meta-meta-rights. This gives the ACS operational principle:

P8. A meta-right is the right to allocate any entity right, including itself.

Previously, to own an entity was to have all rights to it, but giving away use rights while keeping meta-rights is still ownership, e.g. renting an apartment gives a tenant use rights, but the landlord still owns it, as they keep the meta-rights. The tenant can use it, but the owner says who can use it.
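The renting example can be sketched as an entity that tracks use rights and the meta-right separately. The class and method names below are illustrative assumptions made for this example only.

```python
# Sketch of P8: ownership is keeping the meta-right (the right to re-allocate
# rights), even after use rights are given away, as in renting.

class Entity:
    def __init__(self, owner):
        self.meta_holder = owner       # who may re-allocate rights (the owner)
        self.use_holders = {owner}     # who may currently use the entity

    def rent_to(self, tenant):
        """Delegate use rights; the meta-right stays with the owner."""
        self.use_holders = {tenant}

    def can_use(self, actor):
        return actor in self.use_holders

    def can_reallocate(self, actor):
        return actor == self.meta_holder

apartment = Entity(owner="landlord")
apartment.rent_to("tenant")

assert apartment.can_use("tenant")            # the tenant uses it
assert not apartment.can_use("landlord")      # the owner gave use away...
assert apartment.can_reallocate("landlord")   # ...but still says who can use it
assert not apartment.can_reallocate("tenant")
```

Note that there is no `can_re_reallocate` check: the meta-right is absolute, matching the claim that there are no meta-meta-rights.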

24.4.10 The act of creation

To create an object from nothing is as impossible in an information space as it is in a physical one. Creation cannot be an act upon the object created, which by definition doesn't exist before it is created. An actor can't request ACS permission to create an object that doesn't exist. To create an information object, its data structure must be known, i.e. exist within the system. So creation is an act upon the system, or in general, an act on the space immediately containing the created object, giving the ACS operational principle:

P9. Creation is always an act on a space, up to the system space.

This rule is well defined if the system itself is the first space. Creating is an act upon a space because it changes the space that contains the created object. If creation is an act upon a space, the right to create in a space initially belongs to the space owner:

RightCreate = R(SpaceOwneri, Spacei, OperationCreate)

The right to create in a space initially belongs to its owner, who can delegate it to others. The logic generalizes well, e.g. to add a board post, YouTube video or blog comment requires the board, video, or blog owner's permission. One can only create in a space if its owner permits. An ACS can thus be simply initialized with a system administrator owning the system space with all rights, including create rights. The SA must then give rights away for a community to evolve. If the SA only delegates rights, they can always be taken back.

Creator ownership. Object creation is a simple technical act, but a complex social one, e.g. how are newly created entity rights allocated? The 17th Century British philosopher Locke argued that creators owning what they create is fair and increases prosperity, whether a farmer's crop, a painter's painting or a hunter's catch (Locke, 1963). If the creator of something chooses to sell or give it away, that is another matter. A community that grants producers the right to their products encourages creativity. Conversely, why produce for others to own? This gives the ACS operational principle:

P10. The creator of a new entity should immediately gain all rights to it.

Creator ownership conveniently resolves the issue of how to allocate new object rights — they go to its creator, including meta-rights. This isn't what must happen, as a program can act any way it likes, e.g. to give all created object ownership to the system administrator. Creator ownership is a social requirement not a technical one, i.e. a condition of social success not a logical necessity. Such conditions can however be socio-technical axioms.
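P9 and P10 together can be sketched as a space that checks create permission (an act on the space, not on the nonexistent object) and then assigns the new object to its creator. The Space class and names are illustrative assumptions.

```python
# Sketch of P9/P10: creation is an act on the containing space, and the
# creator immediately owns what is created.

class Space:
    def __init__(self, owner):
        self.owner = owner
        self.create_holders = {owner}   # who may create here (delegable)
        self.children = []

    def delegate_create(self, actor):
        """The space owner delegates the create right to another actor."""
        self.create_holders.add(actor)

    def create(self, actor, obj):
        # Permission is checked against the space, not the (nonexistent) object.
        if actor not in self.create_holders:
            raise PermissionError(f"{actor} cannot create in this space")
        child = {"object": obj, "owner": actor}   # creator ownership (P10)
        self.children.append(child)
        return child

board = Space(owner="admin")
board.delegate_create("bob")
post = board.create("bob", "Hello world")
assert post["owner"] == "bob"        # the creator, not the admin, owns the post
try:
    board.create("carol", "spam")    # carol holds no create right here
except PermissionError:
    pass
```

As the text notes, a program could instead give every created object to the system administrator; creator ownership is a social choice, which this sketch simply encodes as the default.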

Creation conditions. These arise when a space owner partially delegates creation, limiting:

  1. Object type. The object type created, e.g. the right to create a conference paper isn't the right to create a mini-track space.

  2. Operations. The operations allowed on created objects, e.g. blog comments aren't usually editable once added, but ArXiv lets authors edit publications as new versions.

  3. Access. Who can access created objects, e.g. YouTube gives contributors exclusive edit rights, but Wikipedia lets anyone edit any creation.

  4. Viewing. Who can view created objects, e.g. bulletin boards let you see what others submit, but conferences in the paper review phase don't.

  5. Editing. The field values of a created object, e.g. date added may be non-editable. The space owner may also set field default values.

A space owner can delegate creation rights as needed, e.g. to set vote results to only show to people who have voted, to avoid bias.

Transparency. Yet fairness dictates a creator's right to know creation conditions in advance. In general, transparency is the right to view rights that affect you. So those who create in a space should know the creation rules in advance. The ACS principle is:

P11. A person can view in advance any rights that could apply to them.

Successful socio-technical systems like Facebook, YouTube and Wikipedia do this.

In sum, a space owner can delegate the right to create in whole or part, but must disclose creation conditions up front so potential creators can decide if creation is worth it.

24.4.11 Role allocations

When an entity is created in a space, the system can assign the following roles:

  • Owner. Has meta rights to the entity.

  • Parent. The containing space owner.

  • Ancestor. Ancestor space owners, with the SA the first ancestor.

  • Offspring. The owners of any entities contained in a space.

  • Local public (space only). Actors who are permitted to enter the space.

A space owner owns its local public role, and so can define what others can do or see in the space:

RoleLocalPublic = (LocalPublic, Spacei, OperationAny)

It can be set manually, as friends are allocated, or point to a GlobalPublic list.

Ancestor role. A conference paper's ancestors are its mini-track, track and conference chairs. An entity, being part of the space it exists in, must be visible to the owner of that space. Privacy doesn't contradict this, as it refers to the display of personal information, not created object information. Generalizing, the ACS principle is:

P12. A space owner should have the right to view any offspring.

So the ancestor role for any entity is given view rights to it:

RoleAncestor = (Ancestors, Entityi, View)

For example, a paper posted on a conference mini-track should be visible to track and conference chairs, but not necessarily to other track or mini-track chairs. Ancestors can be notified of new offspring, as an owner can be notified of new ancestors.

Offspring role. An entity created in a parent space was by definition created by an actor with the right to enter that space. If a space bans the owner of an object in it, the object is disowned, contradicting P1. A child object's owner must enter its space to act on it, even if they can't do anything else. By extension, they can also enter any ancestor space. This doesn’t imply any other rights. The ACS principle is:

P13. An entity owner should be able to enter any ancestor space.

e.g. adding a mini-track paper should let one enter the track and conference spaces. Any space should allow its offspring owners to enter it:

RoleOffspring = (Offspring, Space, Enter)

Table 13 summarizes the basic access rights for entities and spaces.

Entity          View    Delete   Edit    Display   Allocate
Ancestor        √
Parent          √                        √ 1
Owner           √       √        √       √ 2       √
LocalPublic     √ 1,2

Space also      Enter   Create
Ancestor        √
Owner           √       √
LocalPublic     √ 1     √ 1

Table 24.13: Entity and space access rights. 1 As allocated by the owner. 2 As allocated by the parent.

24.4.12 The act of display

To display an object is to let others view it. The right to display isn't the right to view, e.g. viewing a video online doesn't let you display it on your web site (footnote 89). Display is the meta-right to view, i.e. the right to give others the right to view an object, e.g. privacy is the meta-right to display the persona object. As people can keep their phone numbers out of a phone book, so Facebook or LinkedIn personas are displayed to the public by owner consent. The phone company that owns a phone book list can also choose not to display a listing, giving the ACS principle:

P14. Displaying an entity in a space requires both persona and space owner consent.

Displaying an item in a space is its owner giving display rights to the space owner. For example, putting a physical notice on a shopkeeper's notice board involves these steps:

  1. Creation. Create a notice. You own it and can still change it, or even rip it up.

  2. Permission. Ask the board owner if it can be posted on the notice board.

  3. Post. The board owner either vets notices in advance or lets people post themselves.

  4. Removal. As the notice is displayed by mutual consent, either party can remove it, and a poster can also ask that it be removed.

The shopkeeper's right to take a notice down isn't the right to destroy it, because he or she doesn’t own it. Nor can he or she alter (deface) notices on the board.

The same social logic applies online. Creating a video on YouTube gives you view rights to it, but it isn't yet displayed to the public, as this right belongs to the space owner. Giving YouTube the right to display a video is like giving a notice to a shopkeeper to post on their board. The item owner gives the space owner the right to display it in their space. In general, to display any video, photo or text in any online space requires mutual consent, as one party gives another the right to display, giving the ACS principle:

P15. An entity owner must give view meta-rights to a space owner to display in that space.

Display result              Space owner accepts   Space owner rejects
Object owner submits        YES                   NO
Object owner withdraws      NO                    NO

Table 24.14: An open display interaction

Display as a rights transaction is the basis of all publishing. Table 14 shows how the mutual interaction between authors and publishers, or object owners and space owners, operates. A space can delegate display rights, to let creators display as desired, e.g. YouTube. Or it may vet items before display and reject some, e.g. ArXiv, which also lets authors withdraw submissions. Bulletin boards let anyone submit but not withdraw, and reserve the right to moderate postings, i.e. reject later. Authors who publish must give all rights to the publisher. An author can't "un-publish" a paper, but then again, neither can the publisher (footnote 90). Usually the right to publish a work is given once only, but some publishers contract the right to do so many times, e.g. publishing one IGI book chapter led to its re-publication in other collections without author permission (footnote 91) (Whitworth and Liu, 2008).

Entity creation. Technically, creating an entity is simple - the program just creates it - but socially, adding an entity to another's space isn't a one-step act. Adding a YouTube video involves:

  1. Registration. Create a YouTube persona.

  2. Entry. Enter YouTube (not banned).

  3. Creation. Create and upload a video.

  4. Edit. Edit video title, notes and properties.

  5. Submit. Request YouTube to display the video to their public.

  6. Display. The public sees it and can vote or comment.

YouTube lets anyone registered in the public role (1) enter its space (2) and create a video, by uploading or recording, which they own (3). They can view it in private and edit its details (4). At this point, the video is visible to them and administrators, but not to the public, and they can still delete it. It is then submitted to YouTube for display to its public (5). This occurs quickly, as display rights are delegated (6). To create, edit and display a video are distinct steps. YouTube can still reject videos that fail its copyright or decency rules. This isn't a delete, as the owner can still view, edit and resubmit it. In contrast, a technology-based design that lets space owners delete videos at will discourages participation, because people could waste their effort.
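The six steps above can be sketched as a small state machine in which rejection is not deletion: the owner keeps the video and can resubmit. The states and class below are illustrative assumptions, not YouTube's actual implementation.

```python
# Sketch of the creation/display steps: a video moves between states, and a
# rejected video is not destroyed, so its owner can edit and resubmit it.

PRIVATE, SUBMITTED, DISPLAYED, REJECTED = "private", "submitted", "displayed", "rejected"

class Video:
    def __init__(self, owner):
        self.owner = owner
        self.state = PRIVATE      # visible only to the owner and administrators

    def submit(self):
        """The owner asks the space to display the video to its public."""
        self.state = SUBMITTED

    def review(self, ok):
        """The space owner accepts or rejects display (e.g. decency rules)."""
        self.state = DISPLAYED if ok else REJECTED

    def resubmit(self):
        """Reject is not delete: the owner still owns the video."""
        if self.state == REJECTED:
            self.state = SUBMITTED

v = Video("alice")
v.submit()
v.review(ok=False)
assert v.state == REJECTED    # not displayed, but not destroyed either
v.resubmit()
v.review(ok=True)
assert v.state == DISPLAYED
```

Because display is granted by mutual consent, either party can later end it, returning the video to a private state rather than deleting it.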

Consistency. For the above logic to be consistent, it should also apply when the video is itself a space for dependent comments and votes. Indeed it does, as video owners have the choice to allow comments or votes, just as YouTube had the right to accept their video (Figure 19). That YouTube gives the same rights to others as it takes for itself is a key part of its success, and a basic principle of socio-technical design.


Figure 24.19: YouTube video rights

24.4.13 Re-allocating rights

The right to re-allocate rights is part of social interaction. It allows socio-technical systems to evolve from an initial state of one administrator with all rights, to a community sharing rights. Use and meta rights can be re-allocated, as follows:

  1. Transfer. Re-allocate all rights, including meta-rights. Rights are irrevocably given to the new owner, e.g. after selling a house, the old owner has no rights to it.

  2. Delegate. Re-allocate use rights but not meta-rights. It can be reversed, e.g. renting.

  3. Divide. A right divided among an actor set requires all to agree to permit an act, and any party can stop it, e.g. couples who jointly own a house.

  4. Multiply. A right multiplied across an actor set lets them all exercise it as if they owned it exclusively, e.g. couples who severally share a bank account.

                 Allocated by                  Allocated to
                 Meta-rights   Use rights      Meta-rights   Use rights
Transfer                                       √             √
Delegate         √                                           √
Divide use       √             ½ √                           ½ √
Divide all       ½ √           ½ √             ½ √           ½ √
Multiply use     √             √                             √
Multiply all     √             √               √             √

Table 24.15: Results of use and meta-rights re-allocations

Dividing a right means that all must agree to exercise it, while multiplying one means that any party alone can. This isn't just splitting hairs: if a couple owns a house jointly, both must sign the sale deed to sell it, but if they own it severally, either party can sell it and take all the money. Re-allocating rights applies to many social situations, e.g. submitting a paper online can transfer all rights to a primary author, or also let them delegate rights to others, or divide rights so all authors must confirm changes, or multiply rights to all authors. Table 15 shows the resultant states. Each has different consequences, e.g. multiplying the edit right is risky but invites participation, while dividing it is safe but reduces contributions.
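The joint/several distinction can be sketched as a shared right that either needs every holder's approval (divided) or any one holder's (multiplied). The SharedRight class and names are illustrative assumptions.

```python
# Sketch of divided vs. multiplied rights: a divided right needs all holders
# to agree before an act proceeds; a multiplied right lets any holder act alone.

class SharedRight:
    def __init__(self, holders, divided):
        self.holders = set(holders)
        self.divided = divided      # True = joint (all agree), False = several
        self.approvals = set()

    def approve(self, actor):
        if actor in self.holders:
            self.approvals.add(actor)

    def may_act(self):
        if self.divided:
            return self.approvals == self.holders   # every holder must agree
        return len(self.approvals) > 0              # any one holder suffices

# Jointly owned house: both signatures are needed to sell it.
joint = SharedRight({"ann", "ben"}, divided=True)
joint.approve("ann")
assert not joint.may_act()      # ben has not signed yet
joint.approve("ben")
assert joint.may_act()

# Severally shared account: either party can act alone.
several = SharedRight({"ann", "ben"}, divided=False)
several.approve("ann")
assert several.may_act()
```

Applied to the paper-submission example, divided edit rights would require every author to confirm a change, while multiplied edit rights would let any author change the paper alone.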

Delegation. Delegation, by definition, doesn't give meta-rights, so a delegatee can't pass rights on. Renting an apartment gives no right to sub-let, and lending a book doesn't give the right to on-lend it. It isn't hard to show that if delegatees delegate, accountability is diluted. If one loans a book to one who loans it to another who loses it, who is accountable? This gives the operational principle:

P16. Delegating doesn't give the right to delegate.

Allocating use rights to an existing object makes the target person accountable for it, so it requires consent, e.g. one can't add a paper co-author without agreement. The principle is:

P17. Allocating existing object use rights to a person requires their consent.

An ACS might ask: "Bob offers you edit rights to 'The 2012 Company Plan', do you accept?" In contrast, rights to null acts, like view or enter, or to acts like create, can be allocated without consent because they imply no accountability:

P18. Allocating null rights to existing objects, or the right to create, requires no consent.

So space owners can freely delegate entry, view and create rights to anyone.

Social networks. Social networks currently send messages like:

"X wants to be friends with you"

This is a tit-for-tat social trade: X offers to make you a friend if you make them one. Yet by P7, one can befriend another without their permission (footnote 92). If the software allowed it, we might get messages like:

"X considers you a friend"

This is giving friendship, not trading it. As one can love a child unconditionally, even if they don't return the favor, so friendship needn't be a commercial transaction.

For a social network to consider the friends of my friends also my friends contradicts P16. As liking someone doesn't guarantee that one will like their friends, making a friend shouldn't automatically change my friend list. This illustrates a technical option that failed because it had no social basis.

24.4.14 Implementation

Traditional access control enforcement is done by a security kernel, a trusted software module that intercepts every access request submitted to a system and decides if it should be granted or denied, based on a specified access policy model. Usually a centralized approach is used, so one policy decision point handles all resource requests. The user sees either an executed action result or a permission denied message. Social network systems have millions of users, so centralized or semi-decentralized certificates are a bottleneck. This, plus the social need for local ownership by content contributors, suggests a strategy of distributed certificates to implement the ACS policy model outlined here. Allowing local policy decision points to handle resource requests also ensures local user control over resources. If distributed certificates are stored in the stakeholder's namespace, only he or she can access and modify them (Figure 20).


Figure 24.20: Distributed access control model architecture

24.4.15 Summary

A legitimate ACS model can manage rights by assigning owner, parent, ancestor, offspring, and local public roles to objects and spaces. The ACS axioms deduced are:

  1. All non-null entity rights should be allocated to actors.

  2. A persona should be owned by itself.

  3. Every entity has a parent space, up to the system space.

  4. Any right to use an object implies a right to view it.

  5. Any communication act should have prior mutual consent.

  6. A role is a right expressed in general terms, as a pointer or set.

  7. A space owner can ban or give entry to a persona without its owner's permission.

  8. A meta-right is the right to allocate any entity right, including itself.

  9. Creation is always an act on a space, up to the system space.

  10. The creator of a new entity should immediately gain all rights to it.

  11. A person can view in advance any rights that could apply to them.

  12. A space owner should have the right to view any offspring.

  13. An entity owner should be able to enter any ancestor space.

  14. Displaying an entity in a space requires both persona and space owner consent.

  15. To display an entity in a space, the entity owner gives view meta-rights to the space owner.

  16. Delegating doesn't give the right to delegate.

  17. Allocating existing object use rights to a person requires their consent.

  18. Allocating null rights to existing objects, or the right to create, requires no consent.

The above are social requirements not technical necessities, aiming at social sustainability. We are in the process of formalizing this model as a social interaction standard for any socio-technical system.

24.4.16 Discussion questions

Research selected questions from the list below. If you are reading this chapter as part of a class - either at university or a commercial course - you can research these questions in pairs and report back to the class, with reasons and examples.

  1. What is access control? What types of computer systems use it? What don't? How does it traditionally work? How do social networks challenge this? How has access control responded?

  2. What is a right in human terms? Is it a directive? How are rights represented as information? Give examples. What is a transmitted right called? Give examples.

  3. What is the difference between a user and an actor? Contrast user goals and actor goals. Why are actors necessary for online community evolution?

  4. Is a person always a citizen? How do communities hold citizens to account? If a car runs over a dog, is the car accountable? Why then is the driver accountable? If online software cheats a user, is the software accountable? If not, who is? Give an example. If automated bidding programs crash the stock market and millions lose their jobs, who is accountable? Can we blame technology for this?

  5. Contrast an entity and an operation. What is a social entity? Is an online persona a person? How is a persona activated? Is this "possessing" an online body? Is a persona "really" you? If a program activates a persona, is it an online zombie? What online programs greet you by name? Do you like that? If an online banking web site welcomes you by name each time, does it build up a relationship? Who are you relating to?

  6. Estimate how many hours a day you interact with technology. Be honest. Of those, how many are with online programs vs. people? Which do you prefer? Are any online programs your friend? Try out a mobile phone assistant you can converse with, like Siri. Ask it to be your personal friend and report the conversation. If AI improved, would you like a personal AI friend?

  7. Must all rights be allocated? What rights must be? Why? What manages online rights? Are AI programs accountable for rights allocated to them? In the USS Vincennes tragedy, was the computer program that shot down the Iranian civilian airliner held to account? Why not? What caused the error? What changed afterwards?

  8. Who should own a persona and why? For three STSs, create a new persona, use it to connect, try to edit it, then to delete it. Compare what properties you can and can’t change. If you delete it entirely, what remains? Can you resurrect it? Describe two ways to join an online community. Which is easier? More secure?

  9. Describe, with examples, current technical responses to the social problems of persona abandonment, transfer, delegation and orphaning. What do you recommend in each case?

  10. Why is choice over displaying oneself to others important for social beings? What is the right to control this called? Who has the right to display your name in a telephone listing? Who has the right to remove it? Does the same apply to an online registry listing? Investigate three online cases and report what they do.

  11. How do information entities differ from objects? How do spaces differ from items? What is the object hierarchy and how does it arise? What is the first space? What operations apply to spaces but not items? What operations apply to items but not spaces? Can an item become a space? Can a space become an item? Give examples.

  12. How do comments differ from messages? Define the right to comment as an AEO triad. If a comment becomes a space, what is it called? Demonstrate with three commenting STSs. For systems that allow “deep” commenting (comments on comments on comments, etc), what is going on? (Look at who adds). Would a chat type conversation function be simpler than so many indents?

  13. For each operation set below, explain the differences, give examples, and give another variant:

    • Delete: Delete, undelete, destroy.

    • Edit: Edit, append, version, revert.

    • Create: Create.

    What is the difference between create and edit? Define a fourth operation set.

  14. Is viewing an object an act upon it? Is viewing a person an act upon them? How is viewing a social act? Can viewing an online object be a social act? Why is viewing necessary for social accountability?

  15. What is communication? Is an information transfer a communication, e.g. a download? Why should communication require mutual consent? What happens if it isn't mutual? How does opening a channel differ from sending a message? Can a sender be anonymous to a receiver? Can a receiver be anonymous to a sender? Can senders or receivers be anonymous to the transmission system? Describe online systems that enable channel control.

  16. Answer the following for a landline phone, mobile phone and Skype: How does the communication request manifest? What information does a receiver get and what choices do they have? What happens to anonymous senders? How does one create an address list? What else is different?

  17. What is a role? Can it be empty or null? How is a role like a maths variable or computing pointer? Give role examples from three popular STSs. For each, give the ACS triad, stating what values vary. What other values could vary? Use this to suggest new useful roles.

  18. How can roles, by definition, vary? For three different STSs, describe how each role variation type might work. Give three different examples of implemented roles and suggest three future developments.

  19. If you unfriend a person, should they be informed? Test and report what actually happens on three common SNs. Must a banned bulletin board “flamer” be notified? What about someone kicked out of a chat room? What is the general principle here?

  20. What is a meta-right? Give physical and online examples. How does it differ from other rights? Is it still a right? Can an ACS act on meta-rights? Are there ACS meta-meta-rights? If not, why not? What then does it mean to "own" an entity?

  21. Why can’t an ACS creating an item be an act on that item? Why can’t it be an act on nothing? What then is it an act upon? Illustrate with online examples.

  22. Who owns a newly created information entity? By what social principle? Must this always be so? Find online cases where you create a thing online but don’t fully own it.

  23. In a space, who, initially, has the right to create in it? How then can others create in that space? What are creation conditions? What is the justification? Illustrate object, operation, access, visibility and edit conditions. How does transparency apply?

  24. Give three examples of creating an entity in a space. For each, specify the owner, parent, ancestors, offspring and local public. Which role(s) can the owner change?

  25. For five different STS genres, give examples of online creation conditions. Create something in each. Was the result transparent? Find two examples of non-transparent creations.

  26. For the following, explain why or why not. Suppose you are the chair of a computer conference with several tracks. Should a track chair be able to exclude you, or hide a paper from your view? Should you be able to delete a paper from their track? What about their seeing papers in other tracks? Should a track chair be able to move a paper submitted to their track by error to another track? Investigate and report comments you find on online systems that manage academic conferences.

  27. An online community has put an issue to a member vote. Evaluate these STS options:

    1. Voters can see how others voted, by name, before they vote.

    2. Voters can see the vote average before they vote.

    3. Voters can only see the vote average after they vote, but before all voting is over.

    4. Voters can only see the vote average after all the voting is over.

    Find online votes to illustrate. Do the same for these voting options:

    1. Voters aren't registered, so one person can vote many times.

    2. Voters are registered, but can change their one vote any time.

    3. Voters are registered, and can only vote once, with no edits.

    Can the person calling the vote legitimately define these vote conditions? What if they set conditions like all votes must be signed and will be made public?

  28. Is posting a video online like posting a notice in a local shop window? Explain, covering permission to post, to display, to withdraw and to delete. Can a post be deleted? Can it be rejected? Explain the difference. Give online examples.

  29. Give physical and online examples of rights re-allocations. Specify rights and meta-rights. If four authors publish a paper online, list the ownership options. Discuss how each might work out in practice. Which would you prefer and why?

  30. Should delegating give the right to delegate? Explain, with physical and online examples. What happens to ownership and accountability if delegatees can delegate? Discuss a worst case scenario.

  31. If a property is left to you in a will, can you refuse to own it, or is it automatically yours? What rights can't be allocated without consent? What can? Which of these rights can be freely allocated: Paper author. Paper co-author. Track chair. Being friended. Being banned. Bulletin board member. Logon ID. Bulletin board moderator. Online Christmas card access? Which require receiver consent?

  32. Investigate how SN connections multiply. For you and four friends, list the number of friends each has and compute the average. Based on this, estimate the total possible friends-of-friends in general. By looking at your friends' friend lists, give the actual friends-of-friends count in your case. Estimate how many messages or notifications you get from all your friends per week. From that, estimate the average messages per friend per day. So if you friended all your friends' friends, how many messages could you potentially expect per day? What if you friended your friends' friends' friends too? Why is the number so large? Discuss the film Six Degrees of Separation.

  33. Demonstrate how to "unfriend" a person in three social networks. Are they notified? Is unfriending "breaking up"? That an "anti-friend" is an enemy suggests "anti-Facebook" sites. Investigate technology support for people you hate, e.g. celebrities or a relationship ex. Try anti-organization sites, like sickfacebook.com. What purpose could technology support for anti-friendship serve?
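Question 32's estimation steps can be sketched numerically. This is an illustrative sketch only: the average friend count and message rate below are assumed figures, to be replaced with the numbers students actually collect.

```python
# Illustrative sketch for the friends-of-friends estimate in question 32.
# AVG_FRIENDS and MSGS_PER_FRIEND_PER_DAY are assumed figures, not data -
# substitute the averages you actually collect.

AVG_FRIENDS = 130                  # assumed average friends per person
MSGS_PER_FRIEND_PER_DAY = 0.2      # assumed messages/notifications per friend per day

def reachable(degree, avg_friends=AVG_FRIENDS):
    """Rough upper bound on people within `degree` friendship hops,
    ignoring overlaps between friend lists."""
    return avg_friends ** degree

def daily_messages(degree):
    """Expected messages per day if everyone within `degree` hops were a friend."""
    return reachable(degree) * MSGS_PER_FRIEND_PER_DAY

for d in (1, 2, 3):
    print(f"degree {d}: ~{reachable(d):,} people, ~{daily_messages(d):,.0f} messages/day")
```

Because each hop multiplies rather than adds, the message load grows geometrically, which is the same small-world effect behind "six degrees of separation".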

24.5 Part 5: The future

The future isn't technical or social but both.

24.5.1 Technology utopianism

Technology utopianism is the belief that technology alone creates the future. It is popular in fiction, e.g. Rosie in The Jetsons, C-3PO in Star Wars and Data in Star Trek are robots that read, talk, walk, converse, think and feel. As we do these things easily, how hard could it be? In films, robots learn (Short Circuit), reproduce (Stargate's replicators), think (The Hitchhiker's Guide's Marvin), become self-aware (I, Robot) and eventually replace us (The Terminator, The Matrix). In this view, computers are an unstoppable evolutionary juggernaut (Figure 24.21), but right now they couldn't conquer a planet of cockroaches.

Technological utopianism

Author/Copyright holder: Unknown (pending investigation). Copyright terms and licence: Unknown (pending investigation). See section "Exceptions" in the copyright terms below.

Figure 24.21: Technological utopianism

Despite cartoons and science fiction, today's housework robot is the Roomba (Figure 24.22). It picks up dirt, but user problems include turning in circles, paralysis, thread and hair clogs and getting stuck. Even washing it in water kills it. Make no mistake, this is a clever robot, yet it needs help to survive in a house with flat terrain, no enemies and free food. (footnote 93) Dropped outside or left alone, it can't survive. Even insects do better.

Author/Copyright holder: Courtesy of Fleischer Brothers production. Copyright terms and licence: pd (Public Domain (information that is common property and contains no original authorship)).

Author/Copyright holder: Courtesy of Brayon Michael Pieske. Copyright terms and licence: pd (Public Domain (information that is common property and contains no original authorship)).


Author/Copyright holder: Courtesy of Larry D. Moore. Copyright terms and licence: CC-Att-SA-3 (Creative Commons Attribution-ShareAlike 3.0).

Figure 24.22 A-B-C: Robot help in: a. Superman, b. Star Wars, c. Reality (the first iRobot Roomba version)

Conversation is the same. People chat easily about many things, but the computers next to us are strangely silent. The technology exists already so why don't cars talk to us like KITT in Knight Rider? Is it because computer conversations are embarrassing? (footnote 94)

Spatial coordination is the same. Compare the Robot World Cup and the human World Cup. Artificial Intelligence (AI) advocates say robots will surpass people in forty years, but they said that forty years ago. Like a mirage, AI breakthroughs are always forty years hence — or is it forty thousand years? For vision, computers still struggle to match retinal, let alone cortical, activity:

Computers are no real competition for the human brain in areas such as vision, hearing, pattern recognition and learning. ... And when it comes to operational efficiency there is no contest at all. A typical room-size supercomputer weighs roughly 1,000 times more, occupies 10,000 times more space and consumes a millionfold more power ...

-- Boahen, 2005

The point isn’t what computers can’t do, but that tasks like talking, walking and thinking aren’t as easy as they might seem. Technology utopianism predicts a "singularity" based on Moore's law, that computer processing power doubles every eighteen months (footnote 95). It says that shortly, super-intelligent computers will replace people (Kurzweil, 1999). This “big lie” (footnote 96) fantasy sees the future as just more of the same processing computers already have. Yet evolution is never more of the same, and the brain isn't just a big computer.
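The arithmetic behind such singularity projections is easy to state. As a minimal sketch (the eighteen-month doubling period comes from the text; the time horizons are illustrative):

```python
# Moore's-law extrapolation: processing power doubling every 18 months.
def moores_multiplier(years, doubling_months=18.0):
    """Processing-power multiplier after `years` of steady doubling."""
    return 2.0 ** (years * 12.0 / doubling_months)

print(moores_multiplier(15))   # 15 years -> 2^10 = 1024x
print(moores_multiplier(45))   # 45 years -> 2^30, roughly a billion-fold
```

The chapter's argument stands regardless of the arithmetic: a billion-fold more of the same processing is still the same kind of processing.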

Letraset page for letter 'A'

Figure 24.23: Letraset page for letter 'A'

Computers calculate better than us, as cars travel faster and cranes lift more, but calculating isn't all the brain does. Simple processing (footnote 97) works for simple cases, but real tasks like vision, hearing, thinking and conversing are productive, i.e. their information increases geometrically with size. (footnote 98) The productivity of language means that five-year-olds can produce more sentences than they could have learned in a lifetime at a sentence per second (Chomsky, 2006). Children easily see that a Letraset page (Figure 24.23) is all ‘A’s, but computers struggle with such productive variation. Using pixel-level processing for pattern recognition is: “like trying to understand bird flight by studying only feathers. It just cannot be done.” (Norman, 1990). AI experts who saw beyond the hype knew decades ago that productive tasks like language wouldn't be solved anytime soon (Copeland, 1993).
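The geometric growth behind productivity can be illustrated with rough numbers. In this sketch the vocabulary size and sentence length are assumed figures chosen for illustration, not Chomsky's; the point is only the shape of the growth:

```python
# Illustrative productivity arithmetic: word sequences grow geometrically
# with sentence length. VOCAB and MAX_LEN are assumed figures.
VOCAB = 1000      # assumed working vocabulary of a young child
MAX_LEN = 10      # assumed maximum sentence length, in words

# Crude upper bound: every word sequence up to MAX_LEN words. Most are
# ungrammatical, but even a tiny grammatical fraction dwarfs the budget below.
possible = sum(VOCAB ** n for n in range(1, MAX_LEN + 1))

SECONDS_PER_YEAR = 365 * 24 * 3600
lifetime_budget = 80 * SECONDS_PER_YEAR   # one sentence per second for 80 years

print(f"word sequences up to {MAX_LEN} words: ~{possible:.2e}")
print(f"sentences heard at 1/sec for 80 years: ~{lifetime_budget:.2e}")
```

No memorized list can cover such a space, so the brain must generate sentences, not merely retrieve them.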

The exponential growth of simple process power

Author/Copyright holder: Courtesy of Ray Kurzweil and Kurzweil Technologies, Inc. Copyright terms and licence: CC-Att-SA-1 (Creative Commons Attribution-ShareAlike 1.0 Unported).

Figure 24.24: The exponential growth of simple process power

The bottom line for simple processing is the 99% barrier, e.g. 99% accurate computer voice recognition makes one error per 100 words, but an error per minute is well below conversation standards. For computer auto-drive cars, 99% accuracy is an accident a day! In the 2005 DARPA Grand Challenge, five of 23 autonomous vehicles finished a simple course (Miller et al, 2006). In 2007, six of eleven better-funded vehicles finished an urban track with a top average speed of 14mph. Yet skilled people drive for decades on harder roads, in worse weather, in heavier traffic, and faster, with no accidents. (footnote 99) The brain didn't cross the 99% performance barrier just by increasing simple processing power.
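The 99% barrier is simple rate arithmetic: per-event accuracy times event rate gives the error rate. A hedged sketch (the event rates below are assumed for illustration, not measured):

```python
# The 99% barrier: (1 - accuracy) * event rate = error rate.
def errors_per_minute(accuracy, events_per_minute):
    """Expected errors per minute at a given per-event accuracy."""
    return (1.0 - accuracy) * events_per_minute

# Speech recognition at an assumed conversational rate of 150 words/minute:
print(errors_per_minute(0.99, 150))    # about 1.5 misrecognitions per minute

# Driving at an assumed 100 safety-relevant decisions per minute:
print(errors_per_minute(0.99, 100))    # about one error per minute
```

For productive, high-rate tasks, 99% per-event accuracy still floods the task with errors, which is why the last 1% matters so much.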

Author/Copyright holder: Courtesy of Dmadeo. Copyright terms and licence: CC-Att-SA-3 (Creative Commons Attribution-ShareAlike 3.0).

Author/Copyright holder: MGM. Copyright terms and licence: All Rights Reserved. Used without permission under the Fair Use Doctrine (as permission could not be obtained). See the "Exceptions" section (and subsection "allRightsReserved-UsedWithoutPermission") on the page copyright notice.

Figure 24.25 A-B: Leftmost: Kim Peek was the inspiration for the film, Rain Man. Rightmost: Dustin Hoffman in the role of Rain Man

How can a brain handle “incalculable” tasks? It is an information processor. Its trillion (10^12) neurons are biological on/off devices powered by electricity that allow logic gates (McCulloch and Pitts, 1943), i.e. in principle no different from transistors. If processing power really depends on neuron/transistor numbers, computers should reach the brain's potential soon. Figure 24.24 suggests that computers processed as an insect in 2000 and as a mouse in 2010, and will process as one human in 2020 and beyond all humans in 2045. Of course this is nonsense, as right now computers can't even do what ants do with a sliver of neurons. Or bees, or cockroaches, or flying beetles. How will they then jump to conversation, pattern recognition and learning in a few decades?

The reason is that calculating power wasn't the answer to incalculable tasks, as our brain discovered in its evolution. In savant syndrome, people who can calculate 20-digit prime numbers in their head need full-time care to live in society, e.g. Kim Peek, who inspired the movie Rain Man, could recall every word on every page of over 9,000 books, including all Shakespeare and the Bible, but had to be cared for by his father (Figure 24.25). He was neurologically disabled, as later-evolving parts of his brain didn't develop.

Savants then are the brain working without its more recent sub-systems. That they calculate better suggests that the brain tried simple processing power and evolved past it. In contrast, technology utopians still don’t see that more of the same isn’t evolution.

Computers are electronic savants, calculation wizards that need minders to survive in the real world. If computers excel at the sort of processing the brain outgrew a million years ago, how are they the future? If super-computers built from PC video cards running in parallel are the future of computing, then bigger oxen are the future of farming! How can AI surpass HI (Human Intelligence) if it isn't even going in the same direction?

A system's performance isn’t just its parts but also how they connect. Computers today follow von Neumann's architecture, but the brain doesn't, e.g. it has no CPU (Sperry and Gazzaniga, 1967). It crossed the 99% performance barrier by taking design risks von Neumann avoided (Whitworth, 2009c). Computer science avoids the processing of processing because it gives infinite loops, yet it allows symbolism - linking one neural assembly in the brain (a symbol) to another (a perception). This is the basis of meaning and language. Processing changes information, so it assumes a context. (footnote 100) Only by the processing of processing can we modify contexts, i.e. learn. Denying computers this option denied them meaning.

Rather than an inferior biological version of today's computers, the brain is a different kind of processor altogether. It processes its own processing to give language, mathematics and philosophy. The answer to the productivity problem wasn't more processing but the processing of processing. By this risky step, the brain perceives a "self", "others", "friends" and "community", the same constructs that human and computer savants struggle with. If today's super-computers aren't even in the same processing league as the brain (footnote 101), technology utopians are computing traditionalists posing as futurists.

24.5.2 The socio-technical vision

Mr. Clippy takes charge

Author/Copyright holder: Unknown (pending investigation). Copyright terms and licence: Unknown (pending investigation). See section "Exceptions" in the copyright terms below.

Figure 24.26: Mr. Clippy takes charge

The question facing computing isn't when it will replace people but when people will see it for what it is, e.g. Mr. Clippy, Office 97’s paper clip assistant (Figure 24.26):

It wouldn’t go away when you wanted it to. It interrupted rudely and broke your train of thought.

-- Pratley, 2004

Responses included “Die, Clippy, Die!” (Gauze, 2006), but its Microsoft designer still wondered: “If you think the Assistant idea was bad, why exactly?” The specific answer is: because it thought it was in charge. In Windows XP, Mr. Clippy was replaced by tags smart enough to know their place. Software that tries to be "smart" by itself quickly ends up like the sorcerer's apprentice.

Why tie up twenty-million-dollar super-computers to try to do what brains already do, with millions of years of real life beta-testing? Even if we redesign computers to work like the brain, say as neural nets, who is to say they won’t inherit the same weaknesses? If the brain has solved the productivity problem as well as can be expected, let's change the goals of computing, from human mimicry to human assistance.

This is already happening. Driverless cars are still a dream but reactive cruise control, range sensing and assisted parallel parking already exist (Miller et al, 2006). Computer surgery struggles but computer-supported remote surgery and computer-assisted surgery are here today. Robots run clumsily but people with robotic limbs are more than able. Computer piloted drones are a liability but remotely piloted drones are an asset. Computer-generated animations are great, but state-of-the-art animations like Avatar combine human actors and computers. Chess players advised by computers perform better than either alone. (footnote 102) In killer applications of the last decade, from email to Facebook, people do what they do best and technology does what it does best, e.g. email transmits information and people create meaning. So "horses for courses" means letting computers process information and people process meaning. That meaning is a level above information implies that people should "mind" computers and computers shouldn't control people.

The socio-technical vision

Author/Copyright holder: Unknown (pending investigation). Copyright terms and licence: Unknown (pending investigation). See section "Exceptions" in the copyright terms below.

Figure 24.27: The socio-technical vision

Socio-technology is about technology and people, with the latter the "elder" system (Figure 24.27). If people direct technology it may go wrong, but if technology directs people it will go wrong. Higher levels directing lower ones is evolution, but lower ones directing higher ones is devolution. To focus on lower levels, because they are easier, isn't progress. (footnote 103) To see the Internet only in technical terms is to underestimate it, again! Let computers be background not foreground, as pervasive and ubiquitous computing theories propose. Technology should merge with people, not the other way around. Technology without a human context isn't even useless - it is pointless. If “technology is the future”, something mindless and heartless is in charge of us. So the future is socio-technology, not technology.

Some say the Internet is making us stupid (footnote 104), but a mirror just reflects. Online media showing human brutality, corruption or stupidity just reveal what is. The Internet, as a microscope and telescope on humanity, is showing us to us. It isn't physical, but thoughts cause words and deeds as guns fire bullets. Humanity’s thoughts are now online for us to choose. We, the human race, are choosing what we think, and what we think is now online, with web-counters keeping the score. What the Internet's electronic mirror shows isn't always pretty, but it is real, and to change oneself one must first see oneself. The evolution of computing is a part of human evolution, of a social experiment that has been ongoing for thousands of years. Only by personal evolution, by seeing beyond ourselves, do we help it succeed.

24.5.3 Discussion questions

Research selected questions from the list below. If you are reading this chapter as part of a class - either at university or a commercial course - you can research these questions in pairs and report back to the class, with reasons and examples.

  1. What is technology utopianism? Give examples from movies. What is the technology singularity? In this view, why must computers take over from people? What is the false assumption here?

  2. What technology advances did the last century expect by the year 2000? Which ones are we still awaiting? What do people expect robots to be doing by 2050? What is realistic? How do robot achievements like the Sony dog rank? How might socio-technical design improve the Sony dog? In the socio-technical paradigm, how will robots evolve? Give examples.

  3. If super-computers achieve the processing power of one human brain, then many brains, are many people together more intelligent than one? Review the "Madness of Crowds" theory, that people are less intelligent together. Give examples. Why doesn't adding more programmers to a project always finish it quicker? What, in general, affects whether parts perform better together? Is a super computer, with as many transistors as the brain has neurons, its processing equal? Explain.

  4. How do today's super computers increase processing power? List the processor cores of the top ten. Which use NVidia PC graphics board cores? How is this power utilized in real computing tasks? How do processing cores operating in sequence or in parallel affect performance? How is that decided in practice? (CS students only).

  5. Review the current state-of-the-art for automated vehicles, whether car, plane, train, etc. Are any fully "pilotless" vehicles currently in use? What about remotely piloted vehicles? When does full computer control work? When doesn't it? (hint: consider active help systems). When might full computer control of a vehicle be useful? Suggest how computer control of vehicles will evolve, with examples.

  6. What is the 99% barrier? Why is the last 1% of accuracy a problem for productive tasks? Give examples from language, logic, art, music, poetry, driving and another. How common are such tasks in the world? How does the brain handle them?

  7. What is a human savant? Give examples past and present. What tasks do savants do easily? Can they compete with modern computers? What tasks do savants find hard? What is the difference? Why do savants need support? If computers are like savants, what support do they need?

  8. Find three examples of software that, like Mr. Clippy, thinks it knows best. Give examples of: 1. Acts without asking, 2. Nags, 3. Changes secretly, 4. Makes you work.

  9. Think of a personal conflict you would like advice on. Keep it simple and clear. Now try these three options. In each case explain and ask the question the same way:

    1. Go to your bedroom alone, put a photo of family member you like on a pillow. Explain and ask the question out loud, then imagine their response.

    2. Go to an online chatbot like http://cleverbot.com/ and do the same.

    3. Ring an anonymous help line and do the same.

    Compare and contrast the results. Which was the most helpful?

  10. A rational way to decide is to list all the options, assess each one and pick the best. How many options are there for these contests: 1. Checkers, 2. Chess, 3. Civilization (a strategy game), 4. A MMORPG, 5. A debate. Which ones are computers good at? What do people do if they can't calculate all the options? Can a program do this? How do online gamers rate human and AI opponents? Why? Will this always be so?

  11. Mr. Clippy was based on Bayesian logic. What data drove his decisions? What was left out? Why did users find him rude? Why couldn't he recognize rejection? Which users liked Mr. Clippy? Turn on the auto-correct in Word and try writing the equation: i = 1. Why does Word get it wrong? How can you fix it without turning off auto-correct? Give online examples of recommending and taking charge.

  12. What is the difference between syntax and semantics in language? What are programs good at? Look at text-to-speech systems, like here, or translators here. How successful are they? Are computers doing what people do? At what level is the translating occurring? Are they semantic level transformations? Discuss John Searle's Chinese room thought experiment.
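For question 10, the combinatorics can be sketched directly. Shannon's classic chess estimate, a branching factor of roughly 35 moves per position over roughly 80 plies, shows why exhaustive option-listing fails:

```python
# Naive game-tree size for question 10: branching ** depth lines of play.
def game_tree_size(branching, depth):
    """Every possible line of play, ignoring transpositions."""
    return branching ** depth

chess = game_tree_size(35, 80)           # Shannon's rough chess estimate
print(f"chess lines of play: ~10^{len(str(chess)) - 1}")

# Even at a trillion (10^12) lines per second, enumeration would take
# ~10^111 seconds - which is why people, and strong programs, prune and
# evaluate heuristically rather than "list all the options".
```

Checkers is small enough that computers have solved it; chess, strategy games and debates are not, which is where human-style heuristic judgment still matters.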

24.6 Acknowledgements

Thanks to the first author's wife for helpful advice, to the students of 158729 (STS Design) at Massey University for trying out the questions. Also thanks to Yijing Qian for Figure 2 and Figure 9.

24.7 References

Ackerman, Mark S. (2000): The Intellectual Challenge of CSCW: The Gap Between Social Requirements and Technical Feasibility. In Human-Computer Interaction, 15 (2) pp. 181-203

Ahmad, Adnan and Whitworth, Brian (2011): Distributed access control for social networks. In 2011 7th International Conference on Information Assurance and Security IAS, pp. 68-73

Alberts, Bruce, Bray, Dennis, Lewis, Julian, Raff, Martin, Roberts, Keith and Watson, James D. (1994): Molecular Biology of the Cell 3E. Garland Science

Alexander, Christopher (1964): Notes on the Synthesis of Form. Harvard University Press

Alter, Steven (1999): A general, yet useful theory of information systems. In Communications of the AIS, 1 (3)

Beer, David and Burrows, Roger (2007): Sociology and, of and in Web 2.0: Some Initial Considerations. In Sociological Research Online, 12 (5) pp. 1-15

Benkler, Yochai (2002): Coase's Penguin, or, Linux and "The Nature of the Firm". In Yale Law Journal, 112 (3) pp. 369-446

Berners-Lee, Tim (2000): Weaving the Web: The Original Design and Ultimate Destiny of the World Wide Web. HarperBusiness

Bertalanffy, Ludwig Von (1968): General System Theory: Foundations, Development, Applications (Revised Edition). George Braziller Inc

Boahen, Kwabena (2005): Neuromorphic Microchips. In Scientific American, 292 (5) pp. 56-63

Borenstein, Nathaniel S. and Thyberg, Chris A. (1991): Power, Ease of Use and Cooperative Work in a Practical Multimedia Message System. In International Journal of Man-Machine Studies, 34 (2) pp. 229-259

Boutin, Paul (2004): Can e-mail be saved. In Infoworld, 14

Burk, Dan L. (2001): Copyrightable functions and patentable speech. In Communications of the ACM, 44 (2) pp. 69-75

Callahan, David (2004): The Cheating Culture: Why More Americans Are Doing Wrong to Get Ahead. Mariner Books

Campbell-Kelly, Martin (2008): Historical reflections: Will the future of software be open source?. In Communications of the ACM, 51 (10) pp. 21-23

Chomsky, Noam (2006): Language and Mind. Cambridge University Press

Chung, Lawrence, Nixon, Brian A., Yu, Eric and Mylopoulos, John (1999): Non-Functional Requirements in Software Engineering (THE KLUWER INTERNATIONAL SERIES IN SOFTWARE ENGINEERING Volume 5). Springer

Clark, D. D. and Wilson, D. R. (1987): A Comparison of Commercial and Military Computer Security Policies. In: IEEE Symposium on Security and Privacy 1987. pp. 184-195

Cohen, Bram (2003): Incentives Build Robustness in BitTorrent. In Workshop on Economics of PeertoPeer systems, 6 (22)

Copeland, Jack (1993): Artificial Intelligence: A Philosophical Introduction. Wiley-Blackwell

Cysneiros, L. M. and Leite, Julio Cesar Sampaio do Prado (2002): Non-functional requirements:from Elicitation to modeling languages. In Computer, 35 (3) pp. 8-9

David, Julie Smith, McCarthy, William E. and Sommer, Brian S. (2003): Agility: the key to survival of the fittest in the software market. In Communications of the ACM, 46 (5) pp. 65-69

Davis, Fred D. (1989): Perceived Usefulness, Perceived Ease of Use, and User Acceptance of Information Technology. In MIS Quarterly, 13 (3) pp. 319-340

Department of Defense (1985). TCSEC - Trusted Computer Security Evaluation Criteria (TCSEC), DOD 5200.28-STD. Retrieved 19 May 2012 from Department of Defense:

Diamond, Jared M. (1998): Guns, Germs, and Steel: The Fates of Human Societies. W. W. Norton and Company

Esfeld, Michael (1998): Holism and analytic philosophy. In Mind, 107 (426) pp. 365-380

Ferraiolo, David F. and Kuhn, D. Richard (2004): Role Based Access Control. In Review Literature And Arts Of The Americas, 14 (5) pp. 554-563

Figart, Deborah M. and Golden, Lonnie (eds.) (2000): Working Time: International Trends, Theory and Policy Perspectives (Routledge Advances in Social Economics). Routledge

Forman, Bruce Jay and Whitworth, Brian (2007): Information Disclosure and the Online Customer Relationship. In:Quality, Values and Choice Workshop, Computer Human Interaction 2007, Portland, Oregon, USA. pp. 1-7

Freeden, Michael (1991): Rights (Concepts in Social Thought).

Freudenthal, Eric, Pesin, Tracy, Port, Lawrence, Keenan, Edward and Karamcheti, Vijay (2002): dRBAC: distributed role-based access control for dynamic coalition environments. In Proceedings 22nd International Conference on Distributed Computing Systems, pp. 411-420

Geen, R. G. and Gange, J. J. (1983): Social facilitation: Drive theory and beyond. In: Blumberg, Herbert H. (ed.). "Small Groups and Social Interaction: v. 2 (Small Groups & Social Interactions)". John Wiley and Sons Ltd. pp. 141-153

Gediga, Gunther, Hamborg, Kai-Christoph and Duntsch, Ivo (1999): The IsoMetrics Usability Inventory: An Operationalization Of ISO 9241-10 supporting summative and formative evaluation of software systems. In Behaviour and Information Technology, 18 (3) pp. 151-164

Hoffman, L. R. and Maier, N. R. F. (1961): Quality and acceptance of problem solutions by members of homogenous and heterogenous groups. In Journal of Abnormal and Social Psychology, 62 pp. 401-407

Johnson, Deborah G. (2001): Computer Ethics (3rd Edition). Prentice Hall

Jonsson, Erland (1998): An integrated Framework for Security and Dependability. In Information Security, pp. 22-29

Kant, Immanuel (1999): Critique of Pure Reason (The Cambridge Edition of the Works of Immanuel Kant). Cambridge University Press

Karp, Alan H., Haury, Harry and Davis, Michael H. (2009): From ABAC to ZBAC: The Evolution of Access Control Models. In Control, (0) pp. 22-30

Keeney, Ralph L. and Raiffa, Howard (1976): Decisions with Multiple Objectives: Preferences and Value Tradeoffs. Cambridge University Press

Kelly, Erin (ed.) (2001): Justice as Fairness: A Restatement. Belknap Press of Harvard University Press

Kienzle, Darrell M. and Wulf, William A. (1998): A practical approach to security assessment. In: Proceedings of the 1997 workshop on New security paradigms 1998. pp. 5-16

Knoll, Kathleen and Jarvenpaa, Sirkka L. (1994): Information technology alignment or "fit" in highly turbulent environments: the concept of flexibility. In: Proceedings of the 1994 computer personnel research conference on Reinventing IS: managing information technology in changing organizations 1994. pp. 1-14

Kurzweil, Ray (1999): The Age of Spiritual Machines: When Computers Exceed Human Intelligence. Viking Adult

Lampson, B. W. (1969): Dynamic protection structures. In: Proceedings of the November 18-20, 1969, fall joint computer conference 1969. pp. 27-38

Lessig, Lawrence (1999): Code and Other Laws of Cyberspace. Basic Books

Lindquist, Christopher (2005): Fixing the requirements mess. In CIO, 0

Locke, John (1963): An essay concerning the true original extent and end of civil government: Second of "Two Treatises on Government" (1690). In: Somerville, John and Santoni, Ronald (eds.). "Social and Political Philosophy: Readings From Plato to Gandhi". Anchor. pp. 169–204

Lorenz, E. N. (1963): Deterministic Nonperiodic Flow. In Journal of the Atmospheric Sciences, 20 (2) pp. 130-141

Losavio, Francisca, Chirinos, Ledis, Matteo, Alfredo, Levy, Nicole and Ramdane-Cherif, Amar (2004): Designing Quality Architecture: Incorporating ISO Standards into the Unified Process. In IS Management, 21 (1) pp. 27-44

Mandelbaum, Michael (2002): The Ideas that Conquered the World: Peace, Democracy, and Free Markets in the Twenty-first Century. PublicAffairs

McCulloch, Warren S. and Pitts, Walter H. (1943): A logical calculus of the ideas immanent in nervous activity. In Bulletin of Mathematical Biophysics, 5 (4) pp. 115-133

MessageLabs (2006). The year spam raised its game; 2007 predictions. Retrieved 19 May 2012 from MessageLabs:

MessageLabs (2010). Intelligence Annual Security Report, 2010. Retrieved 19 May 2012 from MessageLabs:

Meyrowitz, Joshua (1985): No Sense of Place: The Impact of Electronic Media on Social Behavior. Oxford University Press, USA

Miller, George A. (1956): The Magical Number Seven, Plus or Minus Two: Some Limits on Our Capacity for Processing Information. In Psychological Review, 63 pp. 81-97

Miller, Isaac, Garcia, Ephrahim and Campbell, Mark (2006): To Drive Is Human. In IEEE Computer, 39 (12) pp. 52-56

Mitchell, William J. (1995): City of Bits: Space, Place, and the Infobahn (On Architecture). The MIT Press

Moreira, Ana, Araújo, João and Brito, Isabel (2002): Crosscutting quality attributes for requirements engineering. In Proceedings of the 14th international conference on Software engineering and knowledge engineering SEKE 02, (0)

Norman, Donald A. (1990): The Design of Everyday Things. New York, Doubleday

Nuseibeh, Bashar and Easterbrook, Steve (2000): Requirements engineering: a roadmap. In: Proceedings of the Conference on The Future of Software Engineering 2000. pp. 35-46

OECD (1996). Guidelines for the Security of Information Systems. Retrieved 19 May 2012 from OECD:

Penrose, Roger (2005): The Road to Reality : A Complete Guide to the Laws of the Universe. Knopf

Pinto, Jeffrey K. (2002): Project Management 2002. In Research Technology Management, 45 (2) p. 22–37

Porra, Jaana and Hirschheim, Rudy (2007): A Lifetime of Theory and Action on the Ethical Use of Computers: A Dialogue with Enid Mumford. In Journal of the Association for Information Systems, 8 (9) pp. 467-478

Poundstone, William (1992): Prisoner's Dilemma. Anchor

Raymond, Eric S. (1999): The Cathedral & the Bazaar: Musings on Linux and Open Source by an Accidental Revolutionary. O'Reilly Media

Regan, Priscilla M. (1995): Legislating Privacy: Technology, Social Values, and Public Policy. University of North Carolina Press

Reid, Fraser J. M., Malinek, Vlastimil, Stott, Clifford J. T. and Evans, Jonathan ST. B. T. (1996): The messaging threshold in computer-mediated communication. In Ergonomics, 39 (8) pp. 1017-1037

Ridley, Matt (2010): The Rational Optimist: How Prosperity Evolves. Harper

Rosa, Nelson S., Justo, George R. R. and Cunha, Paulo R. F. (2001): A framework for building non-functional software architectures. In Parallel Computing, pp. 141-147

Rose, E. (2000): Balancing internet marketing needs with consumer concerns: a property rights framework. In ACM SIGCAS Computers and Society, 30 (2) pp. 20-24

Samuelson, Pamela (2003): Unsolicited communications as trespass?. In Communications of the ACM, 46 (10) pp. 15-20

Sanai, Hakim Abu L Majd Madud (1968): The Enclosed Garden of Truth. Theophania Publishing

Sanders, Mark S. and McCormick, Ernest J. (1993): Human Factors In Engineering and Design. McGraw-Hill Science

Seabold, Daniel E., Honemann, Daniel H. and Balch, Thomas J. (eds.) (1993): Robert's Rules of Order Newly Revised, 11th edition. Da Capo Press

Shannon, Claude E. and Weaver, Warren (1949): The Mathematical Theory of Communication. University of Illinois Press

Shannon, Claude E. and Weaver, Warren (1971): The Mathematical Theory of Communication. University of Illinois Press

Shirky, Clay (2008): Here Comes Everybody: The Power of Organizing Without Organizations. Penguin Press

Short, John, Williams, Ederyn and Christie, Bruce (1976): Visual communication and social interaction - The role of 'medium' in the communication process. In The Social Psychology of Telecommunications, pp. 43-60

Simone, Mauricio De and Kazman, Rick (1995): Software architectural analysis: an experience report. In CASCON 95 Proceedings of the 1995 conference of the Centre for Advanced Studies on Collaborative research

Skinner, Burrhus F. (1948): 'Superstition' in the pigeon. In Journal of Experimental Psychology, 38 (2) pp. 168-172

Smith, Heather A., Kulatilaka, Nalin and Venkatramen, N. (2002): Developments in IS practice III: Riding the wave: extracting value from mobile technology. In Communications of the Association for Information Systems, 8 (0) pp. 467-481

Sommerville, Ian (2004): Software Engineering (9th Edition). Addison Wesley

Spence, Robert and Apperley, Mark (2011). Bifocal Display. Retrieved 4 November 2013 from [URL to be defined - in press]

Sperry, R. W. and Gazzaniga, M. S. (1967): Language following surgical disconnexion of the hemispheres. In: Millikan, Darley (ed.). "Brain Mechanism Underlying Speech and Language". Grune and Stratton

Tenner, Edward (1997): Why Things Bite Back: Technology and the Revenge of Unintended Consequences (Vintage). Vintage

Thompson, Mary, Johnston, William, Mudumbai, Srilekha, Hoo, Gary, Jackson, Keith and Essiari, Abdelilah (1999): Certificate-based Access Control for Widely Distributed Resources. In Proceedings of 8th USENIX Security Symposium, pp. 215-228

Toffler, Alvin (1980): The Third Wave. Bantam

Weiss, Aaron (2003): Ending spam's free ride. In netWorker, 7 (2) pp. 18-24

Whitworth, Brian (2009b): The social requirements of technical systems. In: Whitworth, Brian and Moor, Aldo de (eds.). "Handbook of Research on Socio-Technical Design and Social Networking Systems (2-Volumes)". Information Science Reference

Whitworth, Brian (2011): The Virtual Reality Conjecture. In Prespacetime Journal, 2 (9) p. 1404–1433

Whitworth, Brian (2009a): A Comparison of Human and Computer Information Processing. In: Pagani, Margherita (ed.). "Encyclopedia of Multimedia Technology and Networking (2 Volume Set)". Idea Group Publishing. pp. 230-239

Whitworth, Brian (2006): Measuring disagreement. In: Reynolds, Rodney A., Woods, Robert and Baker, Jason D. (eds.). "Handbook of Research on Electronic Surveys and Measurements".

Whitworth, Brian and Bieber, Michael (2002): Legitimate Navigation Links. In: ACM Hypertext 2002, Demonstrations and Posters 2002, Maryland, USA. pp. 26-27

Whitworth, Brian and Friedman, Robert S. (2009): Reinventing Academic Publishing Online. Part I: Rigor, Relevance and Practice. In First Monday, 14 (8)

Whitworth, Brian and Liu, Tong (2008): Politeness as a Social Computing Requirement. In: Luppicini, Rocci (ed.). "Handbook of Conversation Design for Instructional Applications (Premier Reference Source)". Information Science Reference. pp. 419-436

Whitworth, Brian and Liu, Tong (2009): Channel E-mail: A Sociotechnical Response to Spam. In IEEE Computer, 42 (7) pp. 63-72

Whitworth, Brian and Moor, Aldo de (2002): Legitimate by Design: Towards Trusted Virtual Community Environments. In: HICSS 2002 2002. p. 213

Whitworth, Brian and Whitworth, Alex P. (2010): The social environment model: Small heroes and the evolution of human society. In First Monday, 15 (11)

Whitworth, Brian and Whitworth, Elizabeth (2004): Spam and the Social-Technical Gap. In Computer, 37 (10) pp. 38-45

Whitworth, Brian, Bañuls, Victor, Sylla, Cheickna and Mahinda, Edward (2008): Expanding the Criteria for Evaluating Socio-Technical Software. In IEEE Transactions on Systems, Man, and Cybernetics, 38 (4) pp. 777-790

Whitworth, Brian, Gallupe, Brent and McQueen, Robert (2000): A cognitive three-process model of computer-mediated group interaction. In Group Decision and Negotiation, 9 (5) pp. 431-456

Whitworth, Brian, Gallupe, Brent and McQueen, Robert (2001): Generating agreement in computer-mediated groups. In Small Group Research, 32 (5) pp. 625-665

Wright, Robert (2001): Nonzero: The Logic of Human Destiny. Vintage
