The Future of Human Computer Interfaces and How We Work

I'm going to start out with the pile of questions I asked myself as I wrote this.

What do you see when you sit down to work? My guess is a desk pressed against a wall, maybe a few shelves, one, maybe two monitors in front of you, and a keyboard and mouse dominating the work area of your desk. How do you access information? Do you simply Google something and, if it's not on the first page, give up? How do you get ahold of experts when you have a question? How do you interpret results that you find? How do you store information you've collected? How do you filter information to get exactly what you're looking for?

If your answers were at all typical, I don't think this is a good way to work. I think we, as individuals and as businesses, need to invest more in our work spaces, probably well beyond what most people would even consider. I'm not talking about adding a third monitor or giving everyone an artistic environment. I'm talking about setting up a work environment that's conducive to productivity: instead of merely being the medium on which work takes place, the environment should actively contribute to finding, accessing, retrieving/storing, consuming, and creating data - where data can be anything from art to documentation.

Finding Information

We've grown accustomed to the all-knowing Google answering any request, and really, I don't see anything wrong with that. I even think anyone at Google who may read this will probably share some of this vision. I would very much like to see a world where data searching is context aware. For example, say I'm searching for a datasheet for an old vacuum tube and I have a schematic of an old amplifier open in another tab. I would love if the engine for finding this data saw the context, saw what I was searching for, and changed the label in the schematic into a link to this datasheet. Furthermore, it would be great if it crawled the web and finished finding datasheets and hot-linking them, possibly well before I even got to that page in the first place. Another point is the summary of information and the omission of the irrelevant. Say I were to look up bits in a byte: I don't necessarily need the historical context as to why bits are named bits and bytes, bytes. On the other hand, even though I didn't search for it, presenting that a nybble is 4 bits, and how to tell endianness, is more relevant - unless I had recently searched for historical information or stated I wanted it explicitly. In my opinion we should be less concerned with finding relevant results and more concerned with discarding the irrelevant, while still keeping the verbose available.
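As a rough sketch of what that context awareness might look like, here's a toy re-ranker; the Result shape and the word-overlap scoring are invented stand-ins (a real engine would use something far smarter than bag-of-words):

```python
# Hypothetical sketch: re-rank search results using the user's open-tab
# context. All names here (Result, context terms) are made up for
# illustration; a real engine would use embeddings, not word overlap.
from dataclasses import dataclass

@dataclass
class Result:
    title: str
    snippet: str
    base_score: float  # the engine's normal relevance score

def context_rerank(results, context_terms, boost=0.5):
    """Boost results that share vocabulary with the current context."""
    def score(r):
        words = set((r.title + " " + r.snippet).lower().split())
        overlap = len(words & context_terms) / max(len(context_terms), 1)
        return r.base_score + boost * overlap
    return sorted(results, key=score, reverse=True)

# Context: a schematic of a tube amplifier is open in another tab.
context = {"12ax7", "vacuum", "tube", "amplifier", "triode"}
results = [
    Result("12AX7 datasheet (PDF)", "triode vacuum tube datasheet", 0.6),
    Result("History of the byte", "why bits are called bits", 0.7),
]
for r in context_rerank(results, context):
    print(r.title)  # the datasheet now outranks the history page
```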

I also think that bringing people together, potentially anonymously, based on search and knowledge could be extraordinarily valuable. If I'm looking up 'How to do X' while somebody else is looking up 'How to do Y, an advanced topic from X', it would be fantastic if we could talk, albeit unobtrusively for the person being contacted. It seems to me that searching is desperately lacking a social element and ranking system. How great would a simple Reddit-like upvote system on any given search be? What if browsers added a comment system, hosted through some sort of distributed network and tied to each page, so people could leave comments on any site regardless of that site's functionality?

I also think the boundaries of physical and digital should be more blurred. I'd love if I could set a book on my desk and search through it for an idea or concept by mere image recognition of the cover, or, if it's an unknown book, at least have any pages shown to it explicitly be digested. Say a section was highlighted? It would be great if that were automatically added to a personal journal file of sorts for future reference, especially if related data were automatically associated from online sources, or links made to people who are interested in similar subjects. The digital world doesn't have to be lonely pages indexed like a book, so why are we treating it as such? Today each page can point to any other page in a beautiful web of recursively indexed information, where each topic has lines of association spanning such that no two pages are unconnected. Wikipedia sort of has the simplest form of this, but what if we had systems so capable of automatic understanding - not just tagging - of information that any new info could propagate through that web naturally? Social linkage to people in the same graph, even if anonymous, could help connect people who, together, due to their very specific knowledge, drive mankind further. I should clarify too: I literally mean a web/graph, possibly in 3D, relating and indexing information, possibly like this 3D representation of Wikipedia. (or this one)

Wikiverse

Accessing Data

So, even if all of this data is able to be interconnected, distributed, and hyper-hyperlinked, what good is it if accessing it is anything but simple and intuitive, like reading a book? Obviously this overly linked system is something that people would need to get used to. Until the advent of the WWW we read information linearly, page by page. The web has allowed a tree-traversal style of navigation so that any missed topic can be reviewed, but generally this is a system where the tree only builds down, to simpler information, from the current node. It seems weird to think about an algebra book that suddenly references multidimensional calculus, but this is exactly what I'm implying. In my education there were countless times I had to learn something because 'it will be used later' with no explanation as to how or why. Linking back up the tree allows for information traversal in both directions, eliminating this problem.
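A minimal sketch of that bidirectional linking, where every prerequisite edge automatically creates the 'used later by' edge back up the tree (all names here are hypothetical):

```python
# Every "prerequisite" edge automatically creates the reverse
# "used later by" edge, so you can traverse up the tree as easily
# as down.
from collections import defaultdict

class TopicGraph:
    def __init__(self):
        self.prereqs = defaultdict(set)   # topic -> simpler topics
        self.used_by = defaultdict(set)   # topic -> more advanced topics

    def add_prereq(self, topic, prereq):
        self.prereqs[topic].add(prereq)
        self.used_by[prereq].add(topic)   # the link back up the tree

g = TopicGraph()
g.add_prereq("multivariable calculus", "algebra")
g.add_prereq("linear algebra", "algebra")

# From the algebra page you can now answer "why will this be used later?"
print(g.used_by["algebra"])  # {'multivariable calculus', 'linear algebra'}
```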

But that's still a bit on the 'finding' side of things; how should the information actually be accessed? I'm not advocating for the destruction of books by any means, but I do think the way we actually view and access this data is due for an upgrade. Sure, our screens are progressively getting higher resolution with better color and readability - OLED is breaking the 'no white text on black backgrounds' rule - but I don't think this is enough. I think we need to fundamentally make a new type of viewing device that is more adaptable to different content styles. The 16:9 monitor needs to die. My suggestion: 24x18 (4:3). Now hear me out. I'm envisioning a screen that is about 2.5 times the width of a normal monitor and twice as 'tall', though it's in this height where the magic happens. I'm proposing the top half of the monitor be flat while the bottom half has a curve to it, approaching about 45 degrees by the bottom lip, while the overall horizontal is still curved. This has the effect of being basically the inside of a sphere for the bottom half and a simple curved wall on the top. This, purely anecdotally, seems to make sense to me as it would provide a good encompassing but non-distorted view for video or images, while at the bottom providing a more natural angle for reading - when you read from your phone, do you want it held in front of your face, or lower and tilted? Exactly. Obviously I view this as being a touch screen to allow the navigation of the web of pages as well.

Beyond this, I think it's time we rethink our main input devices as well. Today we primarily use three: mouse, keyboard, and touchscreen. I recognize a few use a pen and tablet, but I don't see that catching on anytime soon, though as with any prediction, I may be wrong. I think we should retain a physical keyboard at frequently used terminals/computers, as the tactile feedback of switches (not rubber domes) is something that can't be beaten, though I think we do need to rethink the shape, layout, and even the way we use a keyboard. Plenty of research has shown we have better options, and I think it's time we start abandoning what we know to be inferior. I don't want to hammer this too much though. Furthermore, I think we need to standardize a mouse that actually has useful inputs. The Roccat Tyon, with its shark fin in the middle, and the Logitech MX Master, with its horizontal scroll, have both done very innovative things to make something that is truly much better than a three-button mouse, but I think combining these ideas into one great, standard mouse could be a game changer. I also think we need to use more of our body - our feet sit there doing nothing, but imagine all the things we could control with two analog pedals.

Another thing is natural interaction without a physical connection: think Leap Motion or Microsoft's Kinect. These systems, I think, have limited practical use, but given the necessity for depth-sensing cameras in some of what I described above, it's kind of a 'why not'. Furthermore, obvious motions like twisting a knob in the air to control volume are simply convenient. Also, complex 'digitally analog' controls such as the color balance of a picture could be controlled in a way that doesn't distract the user with the actual values or knob positions, instead keeping the focus on the raw creation.

Physical controllers are also something I've repeatedly debated the practicality of, as usually they're single purpose, like a MIDI keyboard. I think these controllers, for those who use them frequently enough to justify them, should not be standardized in form factor; as many choices as possible should be available. However, I think a much better universal protocol than MIDI, HID, OSC, etc. needs to be made. I think exposing a raw form of the data in the OS, allowing the protocol to be redesigned on the fly, would allow for more general-purpose uses of hardware, like how some people are using MIDI controllers in Photoshop or Wiimotes for projection mapping. Nothing but good could come from making off-label uses more available, particularly for the disabled.
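As a sketch of what that raw, remappable layer might look like: the ControlEvent format below is invented for illustration, and real input could come from a library like mido (MIDI) or evdev (HID) rather than the hard-coded example values here.

```python
# Instead of apps speaking MIDI or HID directly, a thin translator
# normalizes everything into one generic message any program could
# consume and the user could remap on the fly.
from dataclasses import dataclass

@dataclass
class ControlEvent:
    device: str     # where it came from
    control: str    # a stable, user-remappable name
    value: float    # normalized 0.0 .. 1.0

def from_midi_cc(control_num: int, value: int, mapping: dict) -> ControlEvent:
    """Translate a MIDI control-change (values 0-127) into a generic event."""
    return ControlEvent(
        device="midi",
        control=mapping.get(control_num, f"cc{control_num}"),
        value=value / 127.0,
    )

# Off-label use: a MIDI knob drives a Photoshop-style brush size.
mapping = {1: "brush.size"}           # user-editable, not baked in
event = from_midi_cc(1, 64, mapping)
print(event)  # ControlEvent(device='midi', control='brush.size', value=0.50..)
```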

Another point is the idea of biohacking and body augmentation. Examples include implanted RFID tags (which I actually have) and magnets for sensing electromagnetic fields, though I think a lot of biohackers are missing the point. This is the future I see in store.

Another point is the presentation of the data. Ironically, I think text on a screen is kind of a shit method for this. Moreover, I think a lot of graphs and charts leave a lot to be desired. I don't want to invoke AI and machine learning yet again, but I think this is genuinely a great application of the tech. If graphs could be analyzed and the data aggregated to produce more fitting visualizations on the fly, it would be incredible. Having something that could, for example, take two 2D graphs with a common axis and turn them into one 3D graph would be amazing. In computer science it's a well-understood fact that any data can be represented graphically in some way; I think finding better ways to dynamically link and graph data would be a huge step in the right direction, particularly if that data could come from multiple sources. It's amazing how many different ways there are of representing the same data too, and this can help expose otherwise non-obvious trends.
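As a toy version of the two-2D-graphs-into-one-3D idea, here's a matplotlib sketch with two made-up series that share an x axis:

```python
# Assume two measurements f(x) and g(x) share the same x axis;
# plotting (x, f, g) as a single 3D curve can expose a relationship
# neither 2D plot shows on its own.
import numpy as np
import matplotlib.pyplot as plt

x = np.linspace(0, 10, 200)
f = np.sin(x)          # first 2D graph: f vs x
g = np.cos(2 * x)      # second 2D graph: g vs x

fig = plt.figure()
ax = fig.add_subplot(projection="3d")
ax.plot(x, f, g)       # the two graphs merged on their common axis
ax.set_xlabel("shared axis x")
ax.set_ylabel("f(x)")
ax.set_zlabel("g(x)")
plt.show()
```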

Retrieving and storing data

The theme of this section is going to be decentralization. Because I think this is a subject that has been covered to death, I'm simply going to link to content that I think gets to the point I'm trying to convey:

https://lbry.io/

https://datproject.org/

https://ipfs.io/

One point of contention I have with almost all existing systems is their basis in cryptocurrency for all storage. I think that public data should be free to store on the network, with encrypted/private data being the only thing that costs money. This has the side effect of promoting the use of public data, which via hashing can prevent unnecessary storage of duplicated data, and the private data can be used to fund the public data's overhead costs.
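A minimal sketch of that hashing-for-dedup point, using SHA-256 as the content address (the in-memory dict stands in for the real network):

```python
# Content is addressed by its SHA-256 digest, so the same bytes are
# only ever stored once no matter how many users "upload" them.
import hashlib

store = {}  # digest -> content; stands in for the real network

def put_public(content: bytes) -> str:
    digest = hashlib.sha256(content).hexdigest()
    if digest not in store:        # duplicate uploads cost nothing
        store[digest] = content
    return digest                  # the address anyone can fetch by

a = put_public(b"some public datasheet")
b = put_public(b"some public datasheet")  # second copy is free
assert a == b and len(store) == 1
```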

I think LBRY is currently the closest to the implementation I'm describing, but to my knowledge LBRY doesn't host private data. I think building upon the ideas of LBRY, having data stored as base://user/../folder/../contentname would be best. That way duplicate user names are possible, and users can choose to either bid for that content name or use base://uuid/samecontentname if they don't want to pay; any private content can simply be relegated to the uuid system as well. Sharing data could rather easily be based on Unix-like file permissions. This idea could also relatively easily be integrated with the aforementioned anonymized user profiles/chatting when finding data. Having this user account data stored on the same decentralized network could also allow for many interesting possibilities.
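Purely as a sketch of how that base:// naming might resolve - the tables, placeholder hashes, and the dash heuristic for spotting UUIDs are all invented:

```python
# Both name forms map to the same content-hash layer underneath; the
# named form is just a purchased, human-friendly alias.
names = {
    ("alice", "music/demo.wav"): "sha256:ab12...",        # bid-for name
}
uuids = {
    ("0f8f-4c21-42", "music/demo.wav"): "sha256:ab12...", # free form
}

def resolve(url: str) -> str:
    assert url.startswith("base://")
    owner, _, path = url[len("base://"):].partition("/")
    table = names if not owner.count("-") else uuids  # toy uuid test
    return table[(owner, path)]

print(resolve("base://alice/music/demo.wav"))
print(resolve("base://0f8f-4c21-42/music/demo.wav"))  # same content
```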

I'd also like to mention the idea of distributed computation here as well, as I think it's relevant both for the sake of compression and encryption of the data. I think that having a system like 'this' - I'm referring to all the ideas up until now - in place should ask for contribution from users in return for its use; the obvious goal is to get it to be self-sufficient. If the distributed computational power of all these systems were used for everything I've described until now, that should be more than plenty to allow it to function. This does bring up the idea of balancing usage against contribution. I think the easiest solution is to simply use a system of computational debt tied to each user account. If the user is creating more computational debt than the average debt the system can sustain, then that user should be handicapped in bandwidth accordingly. This does sort of bring us full circle to 'can I just trade debt with someone, or sell them my computational time?', though I don't like this idea for two reasons: 1, this system needs realtime computation - like electricity, peak hours are worth more - and 2, this incentivizes simply paying for compute time instead of actually contributing computational power to the network like it actually needs. Fortunately, as time goes on, the amount of computational power required should get closer and closer to scaling linearly with the amount of content produced, as old content is indexed and linked into the network.
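A minimal sketch of the computational-debt throttling, with invented numbers and names:

```python
# Each account accrues debt for work the network does on its behalf
# and pays it down by contributing cycles; bandwidth is throttled as
# debt climbs past what the network can sustain.
class Account:
    def __init__(self):
        self.debt = 0.0  # in arbitrary compute units

    def consume(self, units):    self.debt += units
    def contribute(self, units): self.debt -= units

def bandwidth_cap(account, sustainable_debt, full_rate=100.0):
    """Full speed at/below sustainable debt, scaled down above it."""
    if account.debt <= sustainable_debt:
        return full_rate
    return full_rate * sustainable_debt / account.debt

acct = Account()
acct.consume(300.0)                                  # heavy compute user
print(bandwidth_cap(acct, sustainable_debt=100.0))   # ~33.3: throttled
```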

Consuming

I think it's been pretty well discussed throughout this, but a few extra points about consumption of information include the idea of making strictly ethical design decisions. For example, we've all seen biased user interfaces where there's something like this:

where the design is actively pursuing an agenda. Instead, the affirmative action should be stated on the button that triggers it, and both actions given equal weight to the user:

With destructive or irreversible actions, such as deletion (not recycling), given a confirmation dialogue.

Beyond that, keep the design minimal but powerful. I think markdown is a great example of this. Users aren't as dumb as people seem to think; we can, and do, learn the ways to make interaction with the things we use daily faster, so make the 'speed limit' as fast as it can be. Putting a frequently used option into a menu that needs to be clicked at all is much slower than assigning it a keyboard shortcut.

Furthermore, the design should promote health. An example of this is Netflix's 'Are you still watching?'. While this was implemented on their end to prevent unnecessary usage of data, it has the side effect of letting users know they've been on the couch longer than is probably advisable. I'm not advocating for interruptions at every corner, just affirmative action by the user before bombardment with data. I do think as much data as possible should be linked to or aggregated, but don't show me more than what I request plus some surface-level information. For something like YouTube this might mean playing a playlist is fine, but don't start playing another 'related' video when that list is over. For something like an interactive datasheet, this would mean showing the most relevant info on the first page, then linking in one way or another (table of contents, drop-down selection menu, etc.) to the more detailed information, with the option to remember preferences on displayed info for the future. This actually leads well into my next point:

Information overload is increasingly becoming a problem globally. As more and more information is accessible at our fingertips and more advertisements have the opportunity to be beamed via any one of a number of surrounding screens directly into our retinas we need a way to filter it down to levels the human brain can cope with and digest. {{< figure src="{static}/blog/times-square.jpg" caption="Bobby Mikul, Times Square :CC0(https://www.publicdomainpictures.net/en/browse-author.php?a=2185)" >}}

Worse than the effect of information overload on the brain, though, is how the way we use technology has trained us to think in the same way as the tools we use. Thinking like I am as I write this - associating various subjects and grabbing bits and pieces from my life of experiences - seems to me to be a skill that people don't use as much when using technology. Don't get me wrong, I think as time goes on people are not only getting smarter, but also getting better at this abstract thinking; but online, when using technology, these skills are just applied less and we fall back to linear associations between things. I'm saying this completely anecdotally, partially due to a lack of research. (Or at least any that I could find. Then again, I'm an engineering major, not a psych major; I may just be missing the right words to ask the question.) I think we're just now approaching the stage where, instead of merely extending our linear thinking capacity - our memory and computation skills - computers, as tools extending the mind, are starting to be able to extend our abstract and relational thinking as well, so that we can do all the amazing things that come from that type of thought. I fully expect this to be a controversial idea, but I'd love to hear why you think otherwise if you do.

Creating

Creation vs consumption is an ever-fought battle. How should you spend your time? Should it be balanced? Should you create more than you consume? Obviously we're hitting hardcore life choices and philosophy here. Truth is, I don't care, as long as you're (I'm) happy. I love going into YouTube-induced comas on a semi-regular basis, but if I didn't also have a creative outlet I think I might explode. Thankfully for me, that creative outlet doesn't need to be something that's classically artistic; anything from programming to making these blog posts works for me. What I will say is that for those who choose to create, having the best environment possible, with the best information, tools, and space around them, is a huge boon to the output and quality of work. I'm not going to bother finding the evidence because I don't think I need it to support the claim that when we're happier, we work better. So that begs the question: what makes us happy? Look, I'm not about to claim to know the meaning of life here, but I think I can at least point out some relatively obvious things and state what I want in a work environment:

First of all is a low noise floor. This is sort of an odd one, as different noise matters in different ways depending on its 'musicality': is it a repeated pattern, what's its frequency (low is less annoying than moderate, but high is much more annoying than low), etc. For example, right now I'm in a room with a computer made rather loud in part by its fans, giving this room a noise floor of about 50 dB, making it about the same volume as a large office; but because that sound sits roughly consistently at 120 Hz, it's an at least not unpleasant background hum. Sadly, on the rare occasion I have that computer off (let's not talk about how much time I spend in this space), I can almost physically feel the change in atmosphere, and it is undoubtedly relaxing.

Next is adequate space for interruptions. While I'm a strong proponent of not eating where you work or consume media, as I think meals should be either social or self-reflective time, I understand that sometimes it's necessary, and there's nothing worse than not having a flat surface to put your bowl of soup on. More practically though, as mentioned before, it's ideal if the mouse and keyboard aren't in the way of desk space that would otherwise be used for physical craft, note taking, art, etc. So I think three spaces total are ideal: one for primary input devices - today that's a mouse and keyboard; a second for papers, a main project, etc.; and a third, easily accessible, for the interruptions and side projects in life.

Next is a visually appealing space. Wires dangling over things, peeling paint, unorganized shelves, etc. are obviously off-putting, but I'd go a step further and say they actively interrupt productivity, as they stick out and beg to be fixed. It's the standard scenario of not wanting to do homework until the room is clean. Ideally though, I'd go a step further still. I think a nice minimal design that accentuates useful things is a good start, and adding a bit of tactile flair can go a long way too. I personally don't want art or static words in my work space as, again, they just distract. A bit of sound-dampening foam on the wall you're facing can go a long way in both the visual and sound departments, and it's pretty cheap too. Until my dreams of a monitor utopia come true, a good start is just getting rid of the monitor's base and using a VESA mount to the wall or the back of the desk; the flexibility in position and extra available desk space go a long way, and it's much, much more visually appealing. Rather paradoxically, I do see value in motion in the workspace too. For example, MIT's reactive table or those fancy marble-in-sand tables can add much-needed visual motion to prevent a space from becoming stale. Hell, even a simple fish tank or plant that adds a bit of change with time makes a huge difference.

To round off the environmental side of creation, and arguably most important, is lighting. In recent years red-light shifting / blue-light filtering of screens has come into fashion, and for good reason: our eyes and brain are tied together into a biological clock, and it turns out we can manipulate it pretty well based on how we set our lights. But at the same time, the light itself can be a huge problem too. Without going too in depth: white≠white≠white≠white... There's cool and warm and natural white, sure, but the 'quality' of this light varies a lot too. There's a lot to be said for getting high-quality lights that put off a natural, sunlight-emulating spread of frequencies instead of just a few peaks that average out to a white. This actually makes photography way better too.

Next I'd like to talk about the tools used for creation. This is where I really have a hard time deciding how money should work, because on one end I could just say "Everything you use should be free and open source (FOSS)!" like a hippy, but honestly I don't believe that. What I support instead is software that is free for personal use and open source, but takes a cut of any profit made in return for usage at any sufficiently large corporate scale. This would mean your average joe is free to get their hands dirty with professional software, which in turn means they have experience with, and would prefer to continue using, the same software for anything commercial. It's a win-win for everyone.

As far as how all of these tools should work and interact: well, I think they should all use standardized file formats, even if they have to abuse them a little to do so, and they should all have a common Application Programming Interface (API) for interaction. This would hopefully mean that any extension written for one program would work for another, and any program could talk to another. Currently the world of music software has a little bit of this, but it still leaves a lot to be desired. I'd actually like to take it a step further, though, and ask that all data of any kind use a common enough format that it can be processed by any extension/program written with this API in mind. Imagine if you could use a synthesizer as a static generator for image manipulation, or color management controls as an EQ. Both would and should behave in strange ways, and it's this very lack of defined behavior that could lead to interesting art forms. I'd love to see a 'Master' API that works across all formats and ideas with a common data type that allows for program⟺program, program⟺extension, program⟺hardware, etc. communication, even in long, complicated chains, in any nodal connection system: (https://github.com/OpenMusicKontrollers)

PatchMatrix
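To make that 'common data type' idea a bit more concrete, here's a minimal sketch of port-to-port communication; everything here (Port, Synth, ImageNoise) is invented for illustration, not an existing API:

```python
# Every node exposes typed ports carrying one common data shape (here,
# just named float samples), so a synthesizer can legally feed an
# image filter. Hypothetical protocol design, not a real library.
import math

class Port:
    def __init__(self):
        self.targets = []
    def connect(self, fn):        # fn consumes (name, value) samples
        self.targets.append(fn)
    def send(self, name, value):
        for fn in self.targets:
            fn(name, value)

class Synth:
    """Emits a sine wave as generic (name, value) samples."""
    def __init__(self):
        self.out = Port()
    def tick(self, t):
        self.out.send("signal", math.sin(t))

class ImageNoise:
    """'Misuses' any incoming signal as per-pixel noise."""
    def __init__(self):
        self.pixels = []
    def receive(self, name, value):
        self.pixels.append(int((value + 1) * 127.5))  # map -1..1 to 0..255

synth, noise = Synth(), ImageNoise()
synth.out.connect(noise.receive)       # program <-> program, one wire
for t in range(10):
    synth.tick(t / 10)
print(noise.pixels)
```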

Potentially this could also plug into the entire OS as well, making it so an image manipulation program's extension could, for example, modify anything output to the screen in real time, or an audio program could affect the output of anything. For developers this may offer even more power, making things such as interprocess communication (think pipes, like $ls -la | grep *.png) a matter of connecting two nodes, or reading disk information such as activity, space, or even writeback and inode information; this would literally allow any one piece of information to be accessible to any other. This does have obvious permission issues, but Unix permissions should already have that under control. If something like this could also be tied into the originally mentioned web searching and socialization web without massive security concerns, the potential use cases range from as simple as getting color information from an image hosted online to as complicated as remote access or distributed computing.

PsCircle

or even supporting a full programming environment sort of like Luna:

Luna
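Circling back to the pipes aside above: the shell already treats processes as nodes joined by edges, and Python's standard subprocess module can build the same ls | grep graph explicitly, which is all the nodal UI would really be doing under the hood:

```python
# Two "nodes" (ls, grep) joined by one edge (the pipe), built by hand.
import subprocess

ls = subprocess.Popen(["ls", "-la"], stdout=subprocess.PIPE)
grep = subprocess.Popen(["grep", ".png"], stdin=ls.stdout,
                        stdout=subprocess.PIPE)
ls.stdout.close()                 # let grep see EOF when ls exits
out, _ = grep.communicate()
print(out.decode())
```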

Wrapping up

In all honesty, I'm not exactly sure what everything I just wrote is about. Mostly it's just a brain dump, but hopefully it's an interesting one. To round things off with a bit of a closing note though: I don't actually foresee many of these things being possible, if only because they'd require so many people to agree on standards. But there is one glimmer of hope, and it's one of proof of uniformity. The terminal. Yes. This terminal:

Terminal

The terminal emulator above is still compatible with the VT220 from 1983 (as are most terminal emulators), yet from it I can do everything I can really think of: browse the web, chat with friends, listen to music, basically anything. I'm not saying we should all stop using Chrome (you totally should though), but I think part of the reason so many neckbeards and sysadmins still use the terminal is that you can do so much with it; because everything uses it as a common interface and it has programming capabilities (or at least bash/zsh/whatever does), you can automate or string together just about anything, exactly as I described above. In fact, I think it would be amazing if all the graphical 3D node-based exploration and data-flow editing I described above had an underlying syntax that could be written directly for the nitty gritty when desired, sort of like the aforementioned Luna.

Finally, I'd like to say I understand we don't all get the choice, be it by monetary, physical, or other restrictions, to have a 'perfect' work environment. If you live in the city, there will be noise; I get that. Obviously I don't expect everyone to go out and make their own versions of some of the high-tech, borderline art installations I linked either. I also don't think everyone's down to get an RFID tag in their hand. I just wanted to present what I see as 'the future', whether it comes in 2018 or 20018. I do, however, hope this has inspired you to look at the way you work, the environment you work in, and how you can improve it. Whether it be by switching software, tidying up those cables, or making a badass desk, I hope something comes of this post.

ReacTable, Netsukuku (2),


Written the next day,

Yesterday I posted an accidental novella that, while a bit of a mess, made some interesting recommendations for how we change the way we interact with computers. In this post I'd like to actually look at a vision of how these ideas might be implemented.

One of the heaviest underlying themes of Part 1 was this idea of a nodal browser and editor. This is where I'm going to be spending about 98% of this post: digging into a potential implementation of this idea.

To begin with, my idea faces a crippling problem: mixed data sources and types. How on a graph could user profiles, web pages, system devices, local files, and data manipulation nodes all coexist? The answer, to me, is they don't have to, at least not all at once. I think the easiest implementation isn't of a 3-dimensional graph, but rather a 2+1 dimensional graph. By this I mean giving a control which selects what's held in the main plane (the YZ plane, X=0), and then a choice of a third node source that exists in planes behind this (YZ planes, X<0). The reason I've defined them that way is because I think it makes more sense to navigate a screen by 'flying' to or away from something than it does to let the vertical axis define the third relation. (This assumes a graph where X is the axis perpendicular to the screen, Z is the axis running parallel to the vertical edge, and Y the axis parallel to the horizontal edge.)

Alright, so, that's a bit of a start but still very hard to visualize. What this might mean is that all file-type data - webpages and local files - is represented as a graph only on the YZ plane, while functions on this data are held in the same orientation, but behind it. Say there's content on example.com that you want to store into a file on your local system: a link could be taken from the site and fed into a functional node that exists in a plane behind this main file-navigation one, then a link created from this function into a new file node that's in a branch:

By defining dimensions to hold whatever you like, this could be incredibly flexible. For example, instead of storing a third data type in a constant plane behind the main one, the depth in the X direction could be set as a log() of the time since file modification, showing the relative age of data visually. Even more interesting though, the relationship in the primary plane wouldn't have to be strict. Up until now I've been implying a file-tree-like structure of the data: files in folders, web pages following the navigation of the site, etc. However, this could instead be based on attributes of the data at hand, sorting, say, audio files all together. For example, this could show links between any audio files, which are then clustered by genre, and then even more tightly clustered by artist, with an outlined region linking the encompassed nodes to the artist, this in turn existing in a visually outlined region of the genre, and so on. Bringing this back to the concept of having functions in a further-back plane, these functions could now act on linked sets of data, allowing for easy processing of data based on various selected attributes.
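As a quick aside, here's a minimal sketch of that depth-by-age mapping, assuming the log() idea above; the layout function itself is invented:

```python
# Newer files sit near the front plane (x = 0), older ones recede.
# Uses real mtimes from the current directory.
import math, os, time

def depth(path, scale=1.0):
    age_days = (time.time() - os.path.getmtime(path)) / 86400
    return -scale * math.log(age_days + 1)   # x <= 0; further back = older

for name in os.listdir("."):
    print(f"{name:30s} x = {depth(name):6.2f}")
```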

Say you wanted to apply an EQ profile to any song by artist X that is also instrumental: these two criteria could be filtered for and then linked into that processing function, as sketched below.
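A minimal sketch of that filter-then-link step, with hypothetical node attributes:

```python
# Select every node whose attributes match, then wire the whole set
# into one processing node.
songs = [
    {"title": "A", "artist": "X", "tags": {"instrumental"}},
    {"title": "B", "artist": "X", "tags": {"vocal"}},
    {"title": "C", "artist": "Y", "tags": {"instrumental"}},
]

selected = [s for s in songs
            if s["artist"] == "X" and "instrumental" in s["tags"]]

eq_inputs = [s["title"] for s in selected]  # edges into the EQ node
print(eq_inputs)  # ['A'] - only X's instrumental tracks get the profile
```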

All of this would allow for graphs that can vary in complexity and representation based on what is most useful to the user.

To be extra clear though, this primary, front-most plane wouldn't have to contain files or web pages; it could be the aforementioned programs in the front with data types in the back, or system devices on the main plane (keyboard, mouse, GPU, CPU, memory, USB devices, disks...) with user profiles 'behind' this. I'm not sure why you would do that, but the point is that you should be able to. Finally, on the navigation-of-data side is the idea of letting nodes contain graphs themselves; Luna and Audulus both have this concept implemented pretty well. It can do a pretty good job of abstracting more complicated structures where desired. To make this clear, I mean nodes could contain smaller networks of nodes when 'entered', as a way of reducing outward complexity when only a higher-level view is desired. I.e., if all you care about is that a car drives, that's fine, but inside a 'car' data structure you may find nodes storing data, with inputs and outputs, for things like an engine, transmission, etc.; in the engine node you may see inputs, outputs, and data storage for things like pistons firing. This makes it so layers of abstraction can literally be represented visually, the same way functions can encompass complex behavior when programming. Again, this is not an original idea; basically any flow-based programming language has some aspect of this. A toy sketch of the idea follows.
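Here's that toy sketch: a hypothetical Node class where 'entering' a node reveals its internal network, mirroring the car example:

```python
# 'Entering' a node expands its internal graph, one abstraction layer
# at a time.
class Node:
    def __init__(self, name, subgraph=None):
        self.name = name
        self.subgraph = subgraph or []   # child nodes, hidden by default

    def enter(self, depth=0):
        print("  " * depth + self.name)
        for child in self.subgraph:
            child.enter(depth + 1)

car = Node("car", [
    Node("engine", [Node("piston 1"), Node("piston 2")]),
    Node("transmission"),
])
car.enter()   # expands the abstraction layers visually
```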

Moving on to how individual nodes would be 'defined': they should have both a textual and a visual representation, sort of like in Luna. This allows a scalpel to be taken to the things that require it without 'diving into' nodes to the point of confusion, where you end up n layers of nodes deep in something like the above-mentioned car example. Each node, as well as literally every file, program, and data source/sink on the system, should be defined with some sort of common data wrapper which is used to generate the node's structure. I'm imagining something like this:

Inputs:
Outputs:
metadata:
data:

Where a specific example, a .wav, may have a wrapper that looks something like this:

Inputs:
    null
Outputs:
    AudioData:
        ChannelL:
            Frequency:
                data.frequency.left
            Amplitude:
                data.amplitude.left
        ChannelR:
            Frequency:
                data.frequency.right
            Amplitude:
                data.amplitude.right
metadata:
    NodeType: "Audio"
    Album: "Only Solutions","Journey","Separate Ways (Worlds Apart)"
    Title: "Separate Ways","World's Apart"
    Artist: "Journey of Zeppelin"
    Lyrics: "..."
    Tags: "Rock", "80's"
    //Begin format dependent wrapper
    extension: ".wav"
    format: 16
    samplingfrq: 48000
    ...


data:
    ...
    0x0000130 0x0101010101010101c0ff11000108012c
    0x0000140 0x032c2201020001111103ff0100c4001f
    0x0000150 0x01000105010101010001000000000000
    0x0000160 0x010003020504070609080b0ac4ffb500
    0x0000170 0x00100102030304020503040500040100
    0x0000180 0x017d0302040005112112413113066151
    0x0000190 0x220714718132a1912308b14215c1d152
    0x00001a0 0x24f0623382720a0917161918251a2726
    0x00001b0 0x2928342a363538373a39444346454847
    0x00001c0 0x4a495453565558575a59646366656867
    0x00001d0 0x6a697473767578777a79848386858887
    0x00001e0 0x8a899392959497969998a29aa4a3a6a5
    ...

This would initially define one node with an output of type 'AudioData'. Worth noting: this doesn't actually handle decoding; that would be left to another type of node, most likely one that contains a music player. Also, this node would only appear to have one output, but another type of node could be defined like:

Inputs:
    //a lambda represents a function to be executed in order to build the node,
    //whereas something without the lambda is an actual, visually represented i/o on the node
    λinput = getinput()
    //declaring an input that can become any type should be possible. I'm not saying
    //this void syntax is good, but this is also purely a hypothetical idea at this point.
    input = void
    //also note the lack of a functional lambda means this would be an attachable input
Outputs:
    //outputs can use vars declared in Inputs; I can't think of a situation that would
    //need to be the other way around.
    λtemp = input.getoutputs().totree()
    temp.traverse(2)
    //note the lack of a functional lambda means this would be an attachable output too,
    //though it's variable in size and complexity based on the second level of the
    //output definition of the input node
metadata:
    NodeName: input.NodeType + "Data Splitter"
data:
    null

So that the data output of each channel could be interacted with directly, say, to be piped through an EQ or filter first.

Also, this node is actually generic, so it could be used to access the second level of any input. Though if someone wanted, there's also no reason the header of the wrapper for the wav file couldn't be modified to just remove the original audio data grouping in the first place; however, it would probably need to be regrouped before getting to a decode application. Or the outputs could be defined functionally, meaning the node could be entered and this could just be remapped inside the node visually. Or, weirder yet, this system would let you define your file as separate data structures altogether, with one node that only gets its data from the left channel and another that only gets the data from the right. Obviously the options here are infinite, but some solutions and approaches are probably better than others, though that choice is left to the user/developer.

One more example just to get this idea defined:

Inputs:
    λinput = read(/dev/hidraw0)
    input
Outputs:
    λoutput = map(input,keymap)
    output
metadata:
    NodeName: "HID Device" + input.getvendorid()
data:
    keymap:
        //dvorak key map
        "~":"~"
        ...
        'q':"'",
        'w':",",
        ...
        "/":'z'        

Which would be one way of setting up a keyboard and assigning it a Dvorak layout. Now, ironically, this example probably isn't the best, since that map would make more sense defined as its own node. Though, to some extent, due to the lambda, it already is: the lambda makes the map function an internal function if you were to 'enter' the node.

This wrapper could extend to literally everything in the system, from image files, where you have resolution, (R, G, B) @ pos, etc., to the generic wrapper for programs, exposing the virtual memory mapping and data, CPU usage, running status, etc. For example, a node for a text editor process may, when displayed on the 3D view with the system nodes, expose the connection to the input devices, the output of the window to the frame buffer... you get the idea. This also brings up the idea of contextual i/o.

Part of the beauty behind this is there's no reason any data type couldn't be mapped to another. This would mean something meant to control the color profile of a picture might be repurposed into an EQ for music, or a synthesizer used as a source for noise generation in photo editing.

Because I/O can be seen visually by the user, if something like a virus wanted to 'phone home' it would, at least temporarily, be forced to make a visual connection on screen. Speaking of visuals, connection and node color can carry a lot of information for the user. For example, a node outputting audio could change in color based on frequency and in saturation based on amplitude. The process doing this could itself just be an internally connected node (not shown unless the node is 'entered').
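A minimal sketch of that coloring rule using the standard library's colorsys; the frequency-to-hue mapping is an arbitrary choice:

```python
# Dominant frequency maps to hue, amplitude to saturation.
import colorsys

def node_color(freq_hz, amplitude):
    hue = min(freq_hz / 20000.0, 1.0)         # 0 Hz..20 kHz -> hue 0..1
    sat = max(0.0, min(amplitude, 1.0))       # clamp to 0..1
    r, g, b = colorsys.hsv_to_rgb(hue, sat, 1.0)
    return tuple(int(c * 255) for c in (r, g, b))

print(node_color(440.0, 0.8))   # an A4 tone's node color
```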

One of the bigger ideas this network could implement is machine learning and tag-based recognition of data, to make connections between nodes and their internal data even easier to find. To avoid repeating myself too much, just check out the Finding Information section of the last post. Furthermore, the methods for distributing the computational workload imposed by this, as well as how I think it should be stored, are covered to a reasonable extent in the Retrieving and storing data section. (link to parent post)

Finally, I think it's worth mentioning to what extent I think this could be made a reality, or at the very least a decent proof of concept. Linux, by nature of the 'everything is a file' concept, should allow for this; though even that has its limits, I do think it would allow for the vast majority of this functionality given enough work/effort. From a display and rendering perspective, I think the Arcan display server is well suited to this concept and makes it far less ambitious (though 'less ambitious than needing to make a new display server and/or OS' isn't exactly lowering the bar much). It's also worth noting that, as I described in the parent post, the old school Linux terminal is in a way a proof of concept for this, as it already supports data flow programming in the form of i/o redirection, (2) like pipes.

Unreal Engine Blueprints, Luna, PureData, VSXu, PSCircle, Xcruiser, vvvv, GNU Radio Companion, Walrus, Arcan, Durden, Senseye, Netsukuku (2), PatchMatrix, LBRY, Dat, IPFS, 3DWikipedia, (2),