alcinnz

Question: What was graphics programming like in early versions of Mac OS or Windows? Before GPUs became mainstays?

Who can tell me?

Having gained an understanding of what computer graphics involves, creating GUIs on 1980s or early 90s CPUs seems like quite the accomplishment!

@alcinnz It wasn't all that difficult tbh. Probably easier than today. We drew pixels into memory areas, and when it was time to display them they were copied to the proper area in the display memory. With the low resolutions and color depths at the time, these weren't huge operations.
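
For illustration, here's a minimal sketch in C of that draw-into-RAM-then-copy approach. The dimensions, the `backbuffer`, and the `display_memory` pointer are assumptions for the example, not any particular platform's API:

```c
#include <stdint.h>
#include <string.h>

/* Assumed example dimensions: 640x480 at 8 bits per pixel. "display_memory"
 * stands in for whatever address the video hardware scans out from; on real
 * hardware this would be a fixed address or something handed out by the OS. */
#define SCREEN_W 640
#define SCREEN_H 480

static uint8_t backbuffer[SCREEN_W * SCREEN_H];   /* drawn into by the app   */
extern uint8_t *display_memory;                   /* owned by the video card */

/* Draw a filled rectangle into the off-screen buffer. */
static void fill_rect(int x, int y, int w, int h, uint8_t color)
{
    for (int row = y; row < y + h; row++)
        memset(&backbuffer[row * SCREEN_W + x], color, (size_t)w);
}

/* Copy the finished frame into display memory, one scanline at a time. */
static void present(void)
{
    for (int row = 0; row < SCREEN_H; row++)
        memcpy(&display_memory[row * SCREEN_W],
               &backbuffer[row * SCREEN_W],
               SCREEN_W);
}
```

On real hardware the copy was often done by a blitter or timed to the vertical blank rather than with a plain memcpy, but the shape of the code is the same.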

@harald Makes sense!

A framebuffer in RAM would be enough to accelerate most "productivity" apps, especially if it's less data than modern screens!

@alcinnz The most tedious part is that there was no compositor. When a window was moved and part of your application window was exposed as a result, you were handed the coordinates and expected to redraw that part of your window, and not anything that was now covered up.
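
As a concrete example, this is roughly what handling that exposure looks like in a classic Windows window procedure: BeginPaint reports the invalid rectangle in PAINTSTRUCT.rcPaint, and the application is only expected to repaint that area. The white fill here is just a stand-in for real drawing code:

```c
#include <windows.h>

/* Sketch of a window procedure: on WM_PAINT, the system hands us the
 * exposed ("invalid") rectangle and expects us to repaint only that region. */
LRESULT CALLBACK WndProc(HWND hwnd, UINT msg, WPARAM wParam, LPARAM lParam)
{
    if (msg == WM_PAINT) {
        PAINTSTRUCT ps;
        HDC hdc = BeginPaint(hwnd, &ps);   /* ps.rcPaint = the invalid rect */

        /* Repaint only the exposed area; GDI clips drawing to it anyway.
         * A white fill stands in for the application's real drawing code. */
        FillRect(hdc, &ps.rcPaint, (HBRUSH)GetStockObject(WHITE_BRUSH));

        EndPaint(hwnd, &ps);
        return 0;
    }
    return DefWindowProc(hwnd, msg, wParam, lParam);
}
```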

@mathew Ouch! That'd be asking for bugs!

@alcinnz Yep. Developers used to refer to "mouse turds" from dragging things across application windows and the redraw having fencepost errors and the like.

@mathew @alcinnz I'm disappointed we can't still do this. If you've got a graphically-intensive rendering process, it used to be that you could use other windows to cover it up, and make it run faster. Nowadays, it's not clear how to write a program that behaves this way.

@wizzwizz4 @mathew I've heard Apple say Safari includes such features...

@alcinnz @wizzwizz4 I don't know about Safari, but it's possible to get information about the different layers being composited onto the screen in OS X. There's an open source screenshot program called ScreenToLayers that pulls them out into individual layers in a Photoshop file.

github.com/duyquoc/ScreenToLayers


@alcinnz I can tell you, as it was sufficiently close to the Amiga and Atari ST platforms that many concepts transfer over.

However, the question is very wide-open; did you have something specific you wanted to know about?

@vertigo I can appreciate that...

I suppose more than anything what I'm curious about is text rendering! TrueType was late 80s, & I'm curious how machines of that time managed it! Whilst remaining legible!

@alcinnz
All versions of Kickstart support bitmapped fonts. The fonts are rendered as a monochrome bitmap, with all the characters placed side by side. So, take the Topaz/8 font, for example. There are 256 characters in the ISO Latin-1 character set, and each glyph is 8 pixels wide. So, the font consists of a (256x8)=2048 pixel wide bitmap, 8 pixels in height. As each character is printed, the blitter routines are used to copy a sub-rectangle out of this bitmap onto the target bitmap. To support this, each font also contains a table of glyph widths, kerning tables, etc. Usually just the glyph widths are sufficient, though.
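
A rough sketch of that strike layout and per-character copy, in plain C rather than actual Kickstart or blitter code (the byte-aligned destination is an assumption to keep it short):

```c
#include <stdint.h>

/* 256 glyphs, each 8x8 pixels, stored side by side in one monochrome bitmap
 * that is 2048 pixels (256 bytes) wide and 8 rows tall. Because each glyph
 * is exactly one byte wide here, "blitting" a character reduces to copying
 * 8 bytes, one per row. */
#define FONT_HEIGHT          8
#define STRIKE_BYTES_PER_ROW 256   /* 2048 px / 8 px per byte */

extern const uint8_t font_strike[FONT_HEIGHT][STRIKE_BYTES_PER_ROW];

/* Destination is assumed to be a 1-bit-per-pixel bitmap with a byte-aligned
 * cursor position, purely to keep the example short. */
static void draw_char(uint8_t *dst, int dst_bytes_per_row,
                      int col_byte, int top_row, unsigned char c)
{
    for (int row = 0; row < FONT_HEIGHT; row++)
        dst[(top_row + row) * dst_bytes_per_row + col_byte] = font_strike[row][c];
}
```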

When Kickstart 2.0 was introduced, it offered outline font support for the first time. It worked by pre-rendering the entire font into a dynamically-allocated monochrome bitmap: it would literally build a bitmap font out of your outline font specs when you attempted to open it. On a 7MHz machine w/o a floating point coprocessor, this would take a fair bit of time, as you can imagine. However, once open, the outline font was every bit as fast as a normal bitmapped font.

For other OSes like Windows or MacOS, I believe they worked on a character-by-character basis -- that is, opening an outline font was very fast, but it would cache glyphs as you needed them. This is why sometimes on Windows 3.1, you could see "hiccups" when rendering TrueType fonts in MS Word 6, for example.
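
The per-character caching described here might look something like the following sketch (my own illustration, not Windows or Mac source; `rasterize_outline_glyph` is a hypothetical stand-in for the scaler):

```c
#include <stdbool.h>
#include <stdint.h>

/* Lazy glyph cache: the first request for a character pays the cost of
 * rasterizing its outline; every later request is just a table lookup. */
typedef struct {
    bool     valid;
    uint8_t *bitmap;          /* rasterized monochrome glyph image */
    int      width, height;
} CachedGlyph;

static CachedGlyph glyph_cache[256];   /* one slot per 8-bit character code */

/* Hypothetical scaler/rasterizer for a single outline glyph. */
extern uint8_t *rasterize_outline_glyph(unsigned char c, int *w, int *h);

static const CachedGlyph *get_glyph(unsigned char c)
{
    CachedGlyph *g = &glyph_cache[c];
    if (!g->valid) {                   /* first use: the "hiccup" */
        g->bitmap = rasterize_outline_glyph(c, &g->width, &g->height);
        g->valid  = true;
    }
    return g;                          /* later uses cost almost nothing */
}
```

The Kickstart 2.0 approach described above is essentially the eager version of the same idea: run that rasterization for all 256 codes at open time instead of on demand.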

@vertigo Did the hardware support colour palettes? So we could have coloured text without slowing down that software?

@vertigo Looking at Apple's TrueType specs, parsing those font files would barely be an issue. But as you say, rasterizing them would!

I guess as for text layout, our ambitions grew as hardware capabilities improved...

@alcinnz The Amiga supported up to six bitplanes (back then; AGA-based Amigas support 8 bitplanes), allowing for your choice of 64 or 4096 colors (resp., 256 or 262144 colors for AGA) depending on video mode.

@alcinnz You'll probably wonder how the Amiga pulled off 4096 colors from a 6-bitplane display. When you're ready, let me know. I can explain that too. ;)

@vertigo O.K. I've read up on bitplanes, EGA, CGA, & VGA.

Bitplanes themselves refer to rearranging the bits across different words, so that each plane holds just one bit of every pixel. That keeps the data easy to program with on CPUs with a larger word size.

@alcinnz Also, extremely easy for hardware accelerators to work with, such as the Amiga's blitter. Can scale equipment designed for monochrome to work with color easily.
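
As an illustration of the planar layout being discussed (my own sketch, not actual Amiga code), reassembling one pixel's colour index means pulling the same bit position out of each plane:

```c
#include <stdint.h>

/* Planar ("bitplane") pixel layout: each plane is a monochrome bitmap, and
 * bit n of a pixel's colour index lives in plane n. Five planes give 5-bit
 * indices, i.e. 32 palette entries. The sizes here are example assumptions. */
#define PLANES        5
#define BYTES_PER_ROW 40      /* e.g. a 320-pixel-wide screen: 320 / 8 */

extern const uint8_t *plane[PLANES];   /* one pointer per bitplane */

static unsigned pixel_color_index(int x, int y)
{
    int byte  = y * BYTES_PER_ROW + x / 8;
    int shift = 7 - (x % 8);           /* leftmost pixel = most significant bit */
    unsigned index = 0;

    for (int p = 0; p < PLANES; p++)
        index |= ((unsigned)(plane[p][byte] >> shift) & 1u) << p;

    return index;                      /* looked up in the colour palette */
}
```

Copies and fills then work on each plane exactly as they would on a monochrome bitmap, which is the scaling of monochrome-era hardware mentioned above.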

@alcinnz
When you ask about Windows, are you asking from the perspective of someone writing an application for the platform, or someone implementing that platform?

For the former, the original graphics API is called "Graphics Device Interface" (GDI) and provides a number of 2D drawing operations that can be implemented either in software or in hardware, depending on the driver.

I believe modern Windows still supports this, so the API docs are still out there.
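
For a taste of what the application side looks like, here's a small sketch using a handful of GDI calls (the device context `hdc` is assumed to come from BeginPaint or GetDC elsewhere, and the drawing itself is arbitrary):

```c
#include <windows.h>

/* A few classic GDI calls; drawing state hangs off the device context. */
static void draw_demo(HDC hdc)
{
    HPEN pen = CreatePen(PS_SOLID, 1, RGB(255, 0, 0));  /* 1px solid red pen */
    HGDIOBJ old = SelectObject(hdc, pen);               /* make it current   */

    MoveToEx(hdc, 10, 10, NULL);      /* move the current position to (10,10) */
    LineTo(hdc, 200, 10);             /* line from there to (200,10)          */
    Rectangle(hdc, 10, 20, 200, 80);  /* outlined, filled rectangle           */
    TextOutA(hdc, 10, 90, "Hello, GDI", 10);

    SelectObject(hdc, old);           /* restore the previous pen...          */
    DeleteObject(pen);                /* ...and free the one we created       */
}
```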

@alcinnz
I have no direct experience with classic Windows display driver development, but I found this article interesting:
os2museum.com/wp/display-drive

The exact details are glossed over a little, but I found it interesting that at that point it wasn't yet clear what different display hardware might have in common, and so essentially the entire graphics stack was reinvented for each driver.

Display Drivers, OS/2 and 16-bit Windows | OS/2 Museum

@alcinnz Hello again!

I came across the following today and remembered this old thread:
github.com/PluMGMK/vbesvga.drv

It's a shared-source Windows 3.1 display driver targeting VESA BIOS extensions. My x86 asm is too rusty to get into the weeds of it, but it seems to confirm the earlier article's claim of needing to reimplement the entire graphics stack in each driver!

(I'm not entirely sure what its license is since several parts of it are listed as copyright Microsoft)
