Yesterday I briefly described the sheer amount of effort that goes into I/O. Today I want to get a little more detailed and describe what characterises different media from the computer's perspective! A little about how they work.

Specifically I'll discuss what characterises graphics, audio, text, & persistent storage!

In short: graphics by sheer data generation, audio by timing, text by iteration & concatenation, persistent storage by parsing.


Graphics is defined by the sheer amount of data that needs to be generated! On my machine that's 1,366 × 768 × 4 ≈ 4.2m bytes (or rather 3.1m, since the 4th "colour channel" is used for compute) 60+ times a second. Though the vast majority of those bytes don't change anywhere near that rapidly.
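To put numbers on that, here's the back-of-the-envelope arithmetic in Python, using the resolution above:

```python
# Rough framebuffer bandwidth for a 1,366x768 display, 4 bytes per pixel.
width, height, bytes_per_pixel = 1366, 768, 4

frame_bytes = width * height * bytes_per_pixel
print(frame_bytes)   # 4196352 bytes per frame (~4.2m)

colour_bytes = width * height * 3  # ignoring the 4th "compute" channel
print(colour_bytes)  # 3147264 bytes (~3.1m)

per_second = frame_bytes * 60
print(per_second)    # 251781120 bytes/s (~252m) at 60fps
```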

Most computer graphics algorithms boil down to the techniques of "interpolation" (filling gaps over space or time), "matrix transforms" (repositioning), & "iconography" (how to depict things).
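A minimal sketch of the first two techniques, a linear interpolation helper & a 2D rotation matrix applied to a point (the function names are mine):

```python
import math

def lerp(a, b, t):
    """Interpolation: fill the gap between a & b at fraction t in [0, 1]."""
    return a + (b - a) * t

def rotate(point, angle):
    """Matrix transform: reposition a 2D point by the rotation matrix
    [[cos, -sin], [sin, cos]]."""
    x, y = point
    c, s = math.cos(angle), math.sin(angle)
    return (c * x - s * y, s * x + c * y)

print(lerp(0.0, 10.0, 0.25))            # 2.5
print(rotate((1.0, 0.0), math.pi / 2))  # ~(0.0, 1.0)
```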


"Scanline algorithms", where you compute an image a row at a time, used to be more popular back when computers were barely powerful enough to generate a live image. They're still used in "vector graphics" (describing shapes to show onscreen), mostly for hit-testing/cropping & filling.
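A toy scanline fill, sketched in Python under the even-odd rule (a simplification; real rasterizers track an active-edge list incrementally rather than testing every edge per row):

```python
import math

def scanline_fill(polygon, height):
    """Fill a polygon one row at a time, using the even-odd rule.
    polygon: list of (x, y) vertices. Returns the set of filled pixels."""
    filled = set()
    n = len(polygon)
    for y in range(height):
        cy = y + 0.5  # sample at the centre of the pixel row
        crossings = []
        for i in range(n):
            (x0, y0), (x1, y1) = polygon[i], polygon[(i + 1) % n]
            # Does this edge cross the scanline? (Horizontal edges don't.)
            if (y0 <= cy < y1) or (y1 <= cy < y0):
                # Interpolate along the edge to find the crossing's x.
                crossings.append(x0 + (cy - y0) * (x1 - x0) / (y1 - y0))
        crossings.sort()
        # Pixels between each pair of crossings are inside the shape.
        for left, right in zip(crossings[0::2], crossings[1::2]):
            for x in range(math.ceil(left - 0.5), int(right + 0.5)):
                filled.add((x, y))
    return filled
```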

RAM is extremely useful here to allow painting over your existing canvas!

The 2D nature is also characteristic of graphics, at which point window managers strongly associate displays with mice whilst multiplexing!

P.S. Computer graphics is what I explored by studying Rasterific & JuicyPixels. It's what I am exploring by hypothesizing about how I'd design a PostScript CPU.


Audio is defined by its precise timing requirements! Our ears are painfully good at detecting gaps in audio data, & things don't sound right if the timing's off.

So the code providing audio to the soundcard must have strict performance guarantees, to the extent that it practically can't use the rest of the operating system! This is usually solved by using an atomic ringbuffer to connect a "realtime" thread to a "buffering" thread.
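A sketch of such a single-producer/single-consumer ringbuffer (names are mine; a real audio engine would use atomic indices in a language without GC pauses — the point here is just the never-block discipline on both sides):

```python
class RingBuffer:
    """Single-producer/single-consumer ringbuffer sketch.
    The buffering thread only advances write_idx; the realtime thread
    only advances read_idx; neither side ever blocks."""

    def __init__(self, capacity):
        self.buf = [0.0] * capacity
        self.capacity = capacity
        self.write_idx = 0  # owned by the buffering thread
        self.read_idx = 0   # owned by the realtime thread

    def push(self, sample):
        """Buffering thread: drop the sample rather than block when full."""
        nxt = (self.write_idx + 1) % self.capacity
        if nxt == self.read_idx:
            return False  # full
        self.buf[self.write_idx] = sample
        self.write_idx = nxt
        return True

    def pop(self):
        """Realtime thread: emit silence rather than block on underrun."""
        if self.read_idx == self.write_idx:
            return 0.0  # a gap of silence beats stalling the soundcard
        sample = self.buf[self.read_idx]
        self.read_idx = (self.read_idx + 1) % self.capacity
        return sample
```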

Most audio output consists of tweaked variations of recordings.


Text is represented as bytearrays, where every value each byte (or few bytes) can hold is associated with some sort of symbol ("character"/"char") used when writing text in any supported language. Text encodes data in a very information-dense & versatile form from a human's perspective.

Operating on strings almost entirely involves iteration & concatenation, plus conversion to/from numbers (repeated multiply-add, or divide with remainder).
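For example, here's what that divide-with-remainder & multiply-add conversion looks like, sketched by hand rather than via any particular library:

```python
DIGITS = "0123456789abcdefghijklmnopqrstuvwxyz"

def int_to_string(n, base=10):
    """Repeated divide-with-remainder: peel off the least significant
    digit each round, then reverse."""
    if n == 0:
        return "0"
    out = []
    while n > 0:
        n, r = divmod(n, base)
        out.append(DIGITS[r])
    return "".join(reversed(out))

def string_to_int(s, base=10):
    """Repeated multiply-add: each character shifts the running total
    up by one digit position."""
    total = 0
    for ch in s:
        total = total * base + DIGITS.index(ch)
    return total

print(int_to_string(255, 16))   # ff
print(string_to_int("ff", 16))  # 255
```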

Text-viewing hardware is all software now, for good reason!


Internationalization is vital in text I/O, and historically we weren't great at it. Early computers could only output English, but now we have Unicode! With room to play with defining missing alphabets!

Mapping lookups, domain-specific interpreters, & fancy text rendering are used to address this, to the extent that they're more characteristic of text I/O code than concatenation & iteration!
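Python's unicodedata module illustrates those mapping lookups nicely, including why normalization matters when comparing text:

```python
import unicodedata

# Mapping lookups: every codepoint carries properties you can consult
# while iterating over text.
ch = "\u00e9"  # é
print(ord(ch), unicodedata.name(ch))  # 233 LATIN SMALL LETTER E WITH ACUTE

# The same visible character can also be written as 'e' plus a combining
# accent; normalization folds the two spellings together.
composed = "\u00e9"     # precomposed é
decomposed = "e\u0301"  # 'e' + COMBINING ACUTE ACCENT
print(composed == decomposed)  # False: different codepoint sequences
print(unicodedata.normalize("NFC", decomposed) == composed)  # True
```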

We still maintain backwards compatibility with English-only code, because why not?


Persistent storage is mostly a large stack of parsers, though multiplexing & access control can get interesting... We both want & don't want software to share saved data! Traditionally this multiplexing was made very visible using "filemanagers", though smartphones are hiding this (which I don't like).
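As a tiny illustration of that "stack of parsers": saved data is just bytes until something parses it. Here's a parser for a hypothetical key = value format (the format & function name are made up for illustration):

```python
def parse_config(text):
    """Parse a hypothetical 'key = value' format, one record per line,
    with '#' starting a comment. One small layer of the parser stack."""
    record = {}
    for line in text.splitlines():
        line = line.split("#", 1)[0].strip()  # drop comments & whitespace
        if not line:
            continue  # skip blank lines
        key, _, value = line.partition("=")
        record[key.strip()] = value.strip()
    return record

print(parse_config("name = demo  # a comment\nsize = 42\n"))
# {'name': 'demo', 'size': '42'}
```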

Multiplexing gets interesting when optimizing concurrent access with mutual exclusion!

Very often we layer an optimizing (SQL or shell) interpreter on top. SQL implementations are very talented optimizers!

7/7 Fin!

Hm. Characters can encode phonetics or syllables…

@RyunoKi Yeah, it's hard to define the term "char". It's necessarily quite a fuzzy term...

Given modifiers (diacritics), even more so :-S
