Permeable Computing: A Framework for Perception-Altering Technologies, Designed in Defense of Self
“Myself” is a strange compound born of the linguistic inconvenience of attempting to describe intangibilities discovered during the inward gaze:
“While I was building that bird house, I missed the nail with the hammer and hit myself,” or “I wasn’t sure if I was dreaming so I pinched myself.”
Especially in Western societies steeped in the primacy of the autonomous individual, the word is a utilitarian necessity at best and a gross understatement at worst. It is used interchangeably with words like “me” and “I” and lacks the gravity commensurate with such a fundamentally existential idea. “Myself” optimizes communication speed above all else; this is problematic. Alternatively, it is possible to conceive of the self as an integral yet recognizably distinct entity that is as much a tangible part of the whole human as a big toe, an elbow, or a nose. Imagine all the parts (emotional, intellectual, and physical, including the aforementioned elbow) that make up a person; in this schema, the personified self commands space. In this other way of seeing, the personified self is unimpeachable.
When we move through the world, we are assaulted by things (often new technologies) that demand our attention. So that we are not overwhelmed, we naturally filter what we are presented with in any given moment by assigning value to stimuli. Some stimulations are designed to subvert this value filtration. The most powerful of them grab our attention despite our efforts to filter, then rewrite our value filtration algorithms in their favor, making it easier for their ilk to influence our consciousness in the future. The fields of user interface and user experience design are entirely a consequence of inventors’ desire to create these super-stimulations so as to more directly exert control over the agency and autonomy of users. When confronted with these types of stimulations, we question ourselves:
Am I willing to subject my mind to influence from Instagram in return for social connection and entertainment? Is it acceptable for me to operate a motor vehicle which contains a thousand pre-designed heuristics over which I have no control in return for speedy travel to my destination? Why did I buy this $300 winter jacket from Patagonia when Target sells something similar for a third of the cost?
When we weigh these measurements as myself versus the proposed gain or burden of a stimulation, we implicitly permit ourselves to ignore or under-appreciate the value of myself because myself is intangible. What is the value of the air that you are breathing right now? In one sense, it is unimaginably valuable, as you would die without it. In another sense, one validated by your brain’s ability to breathe autonomously, the air you are breathing has no value: it is omnipresent and there is virtually zero cost associated with finding more of it. In moments of stimulation, myself is valued like omnipresent air. How much would you have to be paid to review 30 minutes of ads for mortgage insurance every morning before you got out of bed? Now ask the same question of your tangible personified self. In this instance, you must literally face your own self in order to negotiate that payment, in the same way that you would when negotiating salary at the beginning of a new job. The personified self tangibly occupies mental real estate, and so, when transacting with it, the cost of any given stimulation is more accurately evaluated.
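The filtration described above is framed in algorithmic terms, so a toy model can make the mechanics concrete. Everything here (the weights, the threshold, the boost) is an illustrative assumption about the metaphor, not a claim about real cognition or any real product’s code:

```python
# Toy model of "value filtration" and a super-stimulation that rewrites it.
# All names and numbers are hypothetical illustrations.

def filter_stimuli(stimuli, weights, threshold=0.5):
    """Pass only stimuli whose weighted value clears the attention threshold."""
    return [s for s in stimuli
            if weights.get(s["kind"], 0.0) * s["intensity"] >= threshold]

def super_stimulus(weights, kind, boost=0.2):
    """A designed stimulus that rewrites the filter in its own favor,
    making future stimuli of its kind harder to ignore."""
    weights[kind] = weights.get(kind, 0.0) + boost
    return weights

weights = {"notification": 0.4, "birdsong": 0.6}
stimuli = [
    {"kind": "notification", "intensity": 1.0},
    {"kind": "birdsong", "intensity": 0.9},
]

print(filter_stimuli(stimuli, weights))   # only birdsong clears the bar
weights = super_stimulus(weights, "notification")
print(filter_stimuli(stimuli, weights))   # now the notification passes too
```

The point of the sketch is the second call: after a single "super-stimulation," the user’s own filter has been altered in the stimulus’s favor without the user choosing anything.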
The Rise of Novel Computing: A Clear Path Towards Immersion
Technology has unquestionably made our lives better: we live longer, are more connected, and can access more information more rapidly than ever before. That being said, contemporary technology is an ouroboros; it at least partially exists to serve and improve itself. We have refined the study of human behavior to such a degree that the levers of influence in any given interaction are well defined and easy to pull, and we have imbued our technology with this understanding:
“Cell phone subscriptions have increased from 2 billion in 2005 to over 7 billion in 2015… The prevalence of smartphone addictive type behaviour among university students has been estimated at 22 % in Belgium, 21 % in mainland China, and 42 % in Hong Kong. Moreover, objective measures have provided valuable insight into the smartphone usage patterns of Korean university students… Students who did not meet the criteria for smartphone addictive type behaviour spent a daily average of 3 h and 45 min engaged with their smartphones (Mac Cárthaigh, 2020).”
The physical form (and, by extension, function) of personal computing has also not substantially changed in the last four decades. While modern products are generally smaller and far faster than they were at launch, a computer technician from 1980 would likely recognize a desktop computer of 2021. Similarly, a user of the Apple Newton (the first of which was released in 1993) would immediately recognize its kinship with any modern smartphone.
This stasis of form and function is now vanishing: a seemingly omnipotent Moore’s Law, advances in the miniaturization of sensor packages, and continued advancement in biocompatible materials and technologies mean that the human interface devices of the near future will break from the familiar evolutionary path of the last forty years. Facebook, the undisputed heavyweight champion of social networking, employs just shy of 10,000 people (approximately one fifth of the firm) for the sole purpose of developing augmented and virtual reality technologies so as to broaden Facebook’s influence beyond the walled gardens of its flagship networks. The company also “has shifted its VR focus away from Oculus Rift-style tethered headsets by releasing the Oculus Quest and Quest 2, which are standalone wireless devices that don’t require a PC” (Byford, 2021). Untethered VR, coupled with a robust cellular data connection, will allow for immersive world-scale experiences that fully blur the line between physical and digital reality. Apple’s full-throated embrace of a modified form of LiDAR in its iPad and iPhone lines further confirms that VR and AR are here to stay. Rapidly rising adoption rates will snowball as developers discover revenue models built on new content and productivity gambits. Pokémon GO, the old-by-Internet-standards augmented reality app, has grossed nearly $6 billion since its launch in 2016. Niantic, the developer of Pokémon GO, cleverly modified the game to work entirely within the confines of the home in response to the widespread public health quarantines of the COVID-19 pandemic, and revenue skyrocketed: nearly a third of the game’s lifetime gross was accrued during 2020 alone (Tassi, 2021). Pokémon GO notably uses only GPS, accelerometer, and camera data for its augmented reality deployment while refraining from the higher-end sensing technologies (e.g. LiDAR, photogrammetry, point cloud scanning) that enable a more complete immersion experience.
All of the same conditions associated with contemporary technology addiction (and all of the same accrued knowledge of UI/UX best practices) also exist for immersive AR/VR, except that AR/VR are orders of magnitude more engrossing. Engaging with a simple AR app like Pokémon GO while doing something else is difficult but manageable. Division of attention is already impossible with even the most rudimentary VR computing platforms; it will only become easier to be fully engrossed in the future.
While such technologies are only beginning to scratch the surface of what is possible, since our understanding of the brain remains somewhat limited, brain-computer interfaces (BCIs) appear to be the next frontier in immersive computation:
Pager, a nine-year-old macaque, now plays Pong by way of a Neuralink BCI implanted in his brain. While this demo remains stunning, this particular implementation of a brain-computer interface is not new. Schmidt et al. demonstrated similar results in their 1978 study on the precentral gyrus of three macaques, “Fine Control of Operantly Conditioned Firing Patterns of Cortical Neurons.” Similarly, the first brain-computer interface to actively generate crude visual “phosphenes” via electrical signal in a human subject was implanted into the brain of a blind man named Jerry by Dr. William Dobelle around the same time as the Schmidt et al. study (Kotler, 2018). The first BCI for motor control was successfully implanted in 1998 into an immobilized patient suffering from post-stroke locked-in syndrome; the patient was able to control a computer cursor using only thought (Kennedy & Bakay, 1998). Elon Musk, founder of Neuralink and many other companies, has stated that future iterations of Neuralink will be able to both receive and generate signals within the brain in order to communicate with disconnected muscle structures and extracorporeal devices.
Brain-computer interfaces are incredibly promising for many patients in need of significant restoration of neural function. At the same time, it does not feel like a large stretch of the imagination to assume that a Neuralink-like device will one day replace the screen of a smartphone by tapping directly into the visual cortex while recording control commands from the frontal lobe, writing memories to the temporal lobe, and replicating sensory input in the parietal lobe. While still in very early stages of research, mostly involving genetically altered mice, the implantation and alteration of memories is already under scientific scrutiny:
“Research into memory and efforts to manipulate it have progressed at a rapid pace. A “memory prosthetic” designed to enhance its formation and recall by electrical stimulation of the memory center in the human brain has been developed with support from the [United States] Defense Advanced Research Projects Agency (DARPA). In contrast, memory erasure using what has been nicknamed the Eternal Sunshine drug (zeta inhibitory peptide, or ZIP) — after Eternal Sunshine of the Spotless Mind, a Hollywood movie with a mnemonic theme — is being developed to treat recollections of chronic pain (Martone, 2019).”
Independent of hyperbolic statements about the nearness of immersive, BCI-enabled virtual reality experiences, engineering and science are unquestionably marching in unison towards something that resembles a Matrix-like outcome. Hopefully, the dystopian visions of H.R. Giger (and all the bio-slime) will remain figments of imagination.
The Surrender of Self Hidden in a Digital Embrace
Augmented and virtual realities explicitly mediate the user’s environment; this is their greatest weakness and their greatest strength. Where AR/VR bring the user to new worlds, those new worlds are designed by another entity whose goals do not necessarily align with those of the user. It is reasonable to assume that AR/VR immersions are designed by actual people, but the power of modern computing means that it is equally likely an immersive experience was generated, at least in part, via an autonomous process known as procedural generation, which is tailor-made for quickly creating infinite expanses of digitally explorable terrain (Kuo, 2012). Similarly, decisions about information architecture become more pressing in digital environments. Unlike the creator of a traditional screen-based digital experience, the AR/VR creator has 360 degrees of freedom in which to display information and can utilize new modalities of haptic feedback being developed in conjunction with new AR/VR hardware form factors. Users retain a fraction of a scrap of agency in the traditional digital computing space, but in VR even that agency is drastically eroded. These quandaries are not far-off musings on the future of human digital interfaces; they are unfolding right now in head-mounted displays across the globe. It is also worth noting that the contemporary AR and VR devices already corroding autonomy are, in comparison to the BCI of the medium-term future, LED-lit abacuses.
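Procedural generation is easy to demonstrate in miniature. The sketch below uses one classic technique, 1-D midpoint displacement, chosen here purely as an illustration of machine-designed terrain; real AR/VR pipelines use far richer methods, and all parameters here are assumptions:

```python
import random

# Minimal sketch of procedural generation: 1-D midpoint-displacement
# terrain, which grows an explorable landscape from a seed rather than
# from a human designer's hand.

def midpoint_displacement(left, right, depth, roughness=0.5, seed=42):
    """Recursively subdivide a segment, jittering each new midpoint."""
    rng = random.Random(seed)  # fixed seed: the same "world" every run
    points = [left, right]
    spread = roughness
    for _ in range(depth):
        nxt = []
        for a, b in zip(points, points[1:]):
            mid = (a + b) / 2 + rng.uniform(-spread, spread)
            nxt.extend([a, mid])
        nxt.append(points[-1])
        points = nxt
        spread *= roughness  # later passes perturb less: big shapes first
    return points

terrain = midpoint_displacement(0.0, 0.0, depth=6)
print(len(terrain))  # 2 ** 6 + 1 = 65 height samples from a single seed
```

Note that no person chose any individual height value; a different seed yields a different, equally "designed" landscape, which is exactly the authorship ambiguity the paragraph above describes.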
Permeable Computing: A Guide for the Future
A framework that preserves the agency of the self is needed, and, given the state of the art in immersive computation, it is almost too late to implement one. This framework should allow the user to negotiate the critical difference between the fallibility of myself and the immutability of one’s personified self while preserving fundamental human autonomy. Permeable computing is the foundational notion of this framework. More broadly, permeable computing mandates that any human interface must allow the user to come up for air at any point, or, in other words, permeate seamlessly through the digital wall back into unvarnished and unmediated reality. Think of permeable computing as walking through a beaded curtain between two rooms: there is a clear and substantial division between the rooms, yet passing through that division incurs no cost and can be done at will.
In the following outline, the term elements is used as a catch-all for the devices, interfaces, environments, schemas, and/or perceptions associated with permeable computing. Levels of permeation refers to the notion that immersive computing carries varying degrees of mediated perception: unmediated base-level reality and fully immersive virtual reality are both levels of permeation despite their conceptual opposition.
- Permeable computing elements must have an accessible on/off switch.
Like the externally accessible engine kill switch on a race car, every permeable computing device must have a power button which physically eliminates the passage of electrons through the device. In the event that the device is life-sustaining, the power button must instead immediately revert the device to a level of operation that solely maintains normal biological function and does not influence perception.
- Permeable computing elements must allow the user to select their level of permeation without actively controlling or influencing that decision.
Elements must also not seek to passively influence that decision by obscuring permeability; at no point should it be unclear whether the user is in digital space or base-level reality.
- Permeable computing elements must allow for zero-cost transit between available levels of permeation.
Permeating between base reality and a mediated reality should not incur a cost such as a load time, a financial penalty, or a loss of data. While base level reality can fluctuate beyond the user’s control (e.g. if the user is caught in an unanticipated rain shower), levels of permeation beyond the base level must remain under the control of the user at all times even when the user is not actively engaged within that level of permeation.
- Elements must be completely reversible/uninstallable at as little incurred cost and risk as is reasonably possible.
- Elements must not damage the user or influence the user to cause damage to others.
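One way to read the outline above is as an interface contract. The sketch below is purely hypothetical (no real device exposes such an API; every name is invented for illustration) and encodes three of the mandates: the hard kill switch, user-selected permeation, and zero-cost transit between levels:

```python
from enum import Enum

# Hypothetical contract for a permeable computing element.
# Class and method names are illustrative assumptions, not a real API.

class Permeation(Enum):
    BASE_REALITY = 0   # unvarnished, unmediated reality
    AUGMENTED = 1
    IMMERSIVE = 2

class PermeableDevice:
    def __init__(self):
        self.powered = True
        self.level = Permeation.BASE_REALITY

    def kill_switch(self):
        """Hard power-off: no electrons, no perception-altering output."""
        self.powered = False
        self.level = Permeation.BASE_REALITY

    def set_level(self, level):
        """Transit is instant, lossless, and strictly user-initiated;
        the device never chooses or nudges the level itself."""
        if not self.powered:
            raise RuntimeError("device is off; only base reality is available")
        self.level = level  # zero cost: no load time, fee, or data loss

device = PermeableDevice()
device.set_level(Permeation.IMMERSIVE)  # user permeates inward at will...
device.kill_switch()                    # ...and can always come up for air
print(device.level)                     # Permeation.BASE_REALITY
```

The design choice worth noting is that `kill_switch` is unconditional: it takes no arguments, cannot fail, and always lands the user in base reality, mirroring the beaded-curtain property that exit must never be negotiable.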
This framework is designed to address the issues of perception and agency highlighted in this writing. Permeable computing as an idea will hopefully spur other creators of neurally potent devices to engage with the ethical bounds of their own developments while providing a guideline for discussion on the topic. To reiterate: this is important because challenges to human autonomy are happening now. If we wish to continue to benefit from human ingenuity birthed of agency, we must protect our fundamental right to independently interrogate and moderate our own perceptions of reality.
Protect the self to protect the human. Protect the human to protect the future.
Works Cited and Referenced
Apple. (2020, March 18). Apple unveils new iPad Pro with breakthrough LiDAR Scanner and brings trackpad support to iPadOS. Apple Newsroom. https://www.apple.com/newsroom/2020/03/apple-unveils-new-ipad-pro-with-lidar-scanner-and-trackpad-support-in-ipados/
Byford, S. (2021, March 12). Almost a fifth of Facebook employees are now working on VR and AR: Report. The Verge. https://www.theverge.com/2021/3/12/22326875/facebook-reality-labs-ar-vr-headcount-report
Calia, M. (2014, May 13). We Are All Living in H.R. Giger’s Nightmare. Wall Street Journal. https://www.wsj.com/articles/BL-SEB-81227
Kennedy, P. R., & Bakay, R. A. (1998). Restoration of neural output from a paralyzed patient by a direct brain connection. Neuroreport, 9(8), 1707–1711. https://doi.org/10.1097/00001756-199806010-00007
Kristiadi, D. P., Udjaja, Y., Supangat, B., Prameswara, R. Y., Warnars, H. L. H. S., Heryadi, Y., & Kusakunniran, W. (2017). The effect of UI, UX and GX on video games. 2017 IEEE International Conference on Cybernetics and Computational Intelligence (CyberneticsCom), 158–163. https://doi.org/10.1109/CYBERNETICSCOM.2017.8311702
Kotler, S. (2018, April 19). Vision Quest. Wired. https://www.wired.com/2002/09/vision/
Kuo, R. (2012, April 20). Why Borderlands 2 Has the Most Stylish Guns in Gaming. WSJ. https://www.wsj.com/articles/BL-SEB-69805
Li, H., & Chen, C.-H. (2021). Research on the Classic Block Interface Design of Mobile Games. 2021 IEEE International Conference on Consumer Electronics and Computer Engineering (ICCECE), 626–629. https://doi.org/10.1109/ICCECE51280.2021.9342567
Mac Cárthaigh, S. (2020). The effectiveness of interventions to reduce excessive mobile device usage among adolescents: A systematic review. Neurology, Psychiatry and Brain Research, 35, 29–37. https://doi.org/10.1016/j.npbr.2019.11.002
Martone, R. (2019, August 27). A Successful Artificial Memory Has Been Created. Scientific American. https://www.scientificamerican.com/article/a-successful-artificial-memory-has-been-created/
Schmidt, E. M., McIntosh, J. S., Durelli, L., & Bak, M. J. (1978). Fine control of operantly conditioned firing patterns of cortical neurons. Experimental Neurology, 61(2), 349–369. https://doi.org/10.1016/0014-4886(78)90252-2
Srisawatsakul, C. (2016). Measuring information on mobile devices usage: An entropy-based approach. 2016 International Computer Science and Engineering Conference (ICSEC), 1–6. https://doi.org/10.1109/ICSEC.2016.7859895
Subiyakto, A., Adhiazni, V., Nurmiati, E., Hasanati, N., Sumarsono, S., & Irfan, Moh. (2020). Redesigning User Interface Based On User Experience Using Goal-Directed Design Method. 2020 8th International Conference on Cyber and IT Service Management (CITSM), 1–6. https://doi.org/10.1109/CITSM50537.2020.9268822
Tassi, P. (2021, January 8). ‘Pokémon GO’ Made Nearly $2 Billion During 2020’s Pandemic. Forbes. https://www.forbes.com/sites/paultassi/2021/01/08/pokmon-go-made-nearly-2-billion-during-2020s-pandemic/?sh=7adf89137afc