I finished reading World Wide Waste by Gerry McGovern. I'd consider it essential reading for anyone working with computers!

gerrymcgovern.com/books/world-

It's well cited (though I still need to check those citations) & uses maths effectively to make its point.

That computers + (surveillance) capitalism is actually worse for the environment than the predigital era. That we can and must move slow and fix things, and fund that vital work directly.

Don't get me wrong, computers can absolutely help us regain our environmental efficiency. They just *aren't*.

Not as long as we're:
* constantly syncing everything to the cloud,
* expecting same-hour delivery,
* funding our clickbait via surveillance advertising,
* buying a new phone every year,
* using AIs because they're cool rather than useful,
* running bloated software & webpages,
* buying into "big data"
* etc

Computing is environmentally cheap, but it rapidly adds up!
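
To make "it adds up" concrete, here's a rough back-of-envelope sketch; every number in it is an illustrative assumption, not a figure from the book:

```python
# Rough sketch of "cheap but it adds up". All numbers are illustrative
# assumptions, not figures from World Wide Waste.
joules_per_page_load = 20           # assumed device + network + server energy
page_loads_per_person_per_day = 100
people_online = 4.5e9               # rough global internet population

daily_joules = joules_per_page_load * page_loads_per_person_per_day * people_online
daily_kwh = daily_joules / 3.6e6    # 1 kWh = 3.6 MJ
print(f"{daily_kwh:,.0f} kWh/day")  # ~2,500,000 kWh/day from one "cheap" habit
```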

@alcinnz @zensaiyuki I see you talking about “sending everything to the cloud” and couldn’t agree more.

If you are versed on that matter, you may be interested in @hergertme’s Bonsai project.

A bit stale at the moment since Christian is working on a lot of very nice things, but definitely worth having a look at

gitlab.gnome.org/chergert/bons

@alcinnz
"Computing is environmentally cheap (...)"

Computers are not, though, unfortunately: their production is an often-overlooked and massive energy expense, one that frequently exceeds the running energy consumption of their entire lifespan.

There is probably a better way to deal with this through repair and reuse, but either way computers are highly environmentally problematic before they ever get to compute anything. :/

This article has some good info: solar.lowtechmagazine.com/2009
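
For a sense of scale, a minimal sketch of that embodied-vs-operational comparison, with loose placeholder figures rather than the article's actual numbers:

```python
# Compare assumed manufacturing ("embodied") energy of a laptop against its
# running energy over a typical lifespan. Placeholder figures only.
embodied_kwh = 1000      # assumed energy to manufacture one laptop
power_draw_w = 30        # assumed average draw while in use
hours_per_day = 8
years = 5

running_kwh = power_draw_w * hours_per_day * 365 * years / 1000
print(f"manufacture: {embodied_kwh} kWh vs use: {running_kwh:.0f} kWh")
# With these assumptions production (~1000 kWh) exceeds five years of use
# (~438 kWh), which is exactly the point being made above.
```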

@unicorn Yeah, there's a good reason I listed: "buying a new phone every year". I just couldn't fit *why* in my toot.

@alcinnz
You can add TVs and internet boxes to the list.
5G will bring its useless internet-of-shit devices and antennas.
I've heard there are 30 years of rare-earth metal resources left...
@unicorn

@numahell @unicorn Wouldn't be surprised. Our phones each contain all but one of those metals...

@alcinnz @numahell @unicorn and then most of us don't bother handing in old electronics for recycling.

I wonder how long it'll take us before we start "mines" in landfills to get rare earth metals from them

@alcinnz Repairability is also super important, which is why I really like what Fairphone and Framework are doing :)

@GustavoM Agreed! I'm a very enthusiastic participant in my local Repair Cafe!

@unicorn @alcinnz

I've been tasked with writing an argument for our "sustainable development" team about how free software helps lower the environmental cost of computing, so I'm quite interested in this discussion.
Would you have sources on that (beyond lowtechmagazine)?

@Iutech @unicorn I quite like World Wide Waste by Gerry McGovern, which the toot at the top of this thread was reviewing.

Though that doesn't specifically refer to free software. It does, though, advocate for paying directly for the software etc. we benefit from online, so its publishers don't need to pursue less direct & more wasteful income sources.

@unicorn @alcinnz these things don't exist in a vacuum though. It's true there are significant resources involved in manufacturing a computer (which I suspect is what "buy a new phone every year" is shorthand for), and we can and should improve that

But when figuring out what the best option is *right now*, one has to compare the footprint of the computer over its life to other means for accomplishing necessary tasks 1/2

@unicorn @alcinnz I tend to use my computers (and I include my phone) for about 5 years on average. I suspect that the resources consumed to manufacture and transport the information I distribute and consume using those devices, the assistive equipment I would need instead, the utility devices they replace, etc. are not tiny either.

I'd love to see a comparison on that sort of basis, but have not been able to find one 2/2

@alcinnz I do find it interesting that even as it has become cheaper and more efficient than ever to have local storage and computation, we're centralizing it more and more heavily.

But I think Rob Pike had a point when he said he wants no local storage anywhere near him except maybe caches. Managing redundancy and backups is *hard*. And any p2p storage system that a) I would trust and b) mere mortals could be comfortable with, may not be very efficient energy-wise.

@freakazoid I think a lot but not all of this comes down to corporate propaganda.

But there's been a lot of promising developments recently in p2p. We just need to turn it into something useful, and stop focusing exclusively on blockchains!

@alcinnz Well datacenters are something I have a pretty deep understanding of, having worked IN a datacenter for several years and then having worked in SRE at Facebook and Google. I've also done a lot of research into performance-per-watt of CPUs. Efficient "bin-packing" really swamps all other considerations at the end of the day. Power is the vast majority of the cost of a datacenter, so the companies have a big incentive to use as little of it as possible.

@alcinnz Google especially were willing to spend a ton of engineering time to make even tiny improvements in efficiency, whereas at smaller companies the math usually went the other way because their engineer-per-CPU-hour ratio is much higher. "You mean I can spend more on AWS to avoid having to spend an engineer-month implementing this efficiency improvement? Sign me up!"

@alcinnz The other, possibly more important difference is that Google is no longer growing like gangbusters, so they can't just burn money to avoid spending time fixing things like smaller faster growing companies can.

@alcinnz Large datacenters are incredibly efficient, energy-wise, not just because the bigger processors are more efficient but because when you have that much to work with in terms of workload, you can engage in a lot of neat tricks like shutting off unused machines or running batch workloads in the unused capacity. And with the PCIe fabrics the datacenters are deploying now, you can even do the same tricks with individual cards.
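
A minimal sketch of the bin-packing idea mentioned above (first-fit-decreasing on a single CPU dimension; real cluster schedulers juggle many more constraints):

```python
# Pack jobs (by CPU demand) onto as few machines as possible, so idle
# machines can be powered off entirely. Numbers are hypothetical.
def first_fit(jobs, machine_capacity):
    machines = []  # each entry is the remaining capacity of one machine
    for job in sorted(jobs, reverse=True):  # first-fit-decreasing heuristic
        for i, free in enumerate(machines):
            if job <= free:
                machines[i] -= job
                break
        else:
            machines.append(machine_capacity - job)  # power on a new machine
    return len(machines)

jobs = [0.6, 0.5, 0.5, 0.4, 0.3, 0.2, 0.2, 0.1]  # CPU cores demanded per job
print(first_fit(jobs, machine_capacity=1.0))      # -> 3 machines stay on
```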

@alcinnz I have yet to see anything about datacenter energy consumption that actually compares it to some actual alternative. They always compare it to some other activity, probably cherry-picked to be as shocking as possible.

I totally agree on the instant-gratification shipping thing, though even there it's not like they're achieving it with a lot more miles. A lot of that is being done with improvements in logistics using... computers!

@alcinnz I think that it would make a lot more sense to focus on the point of actual ecological damage rather than the consumer end of things. In particular, we desperately need a substantial carbon tax. Even if it's revenue neutral, we'd rapidly see what's really important to people.

@freakazoid The book I was citing there took the approach of performing the comparisons on a global rather than individual basis, and computing how many trees we'd need to plant.

The central point being that computing is environmentally cheap but rapidly adds up. That we can and must do better.

Shipping is an interesting case, showing how computers can help us be more efficient. But computing/instant-gratification can also encourage us to be less so.
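
The book's global trees-to-plant framing, re-sketched with placeholder numbers (these are not McGovern's figures):

```python
# Convert an activity's CO2 into trees needed to absorb it, globally.
co2_tonnes_per_year = 3.0e8      # assumed annual CO2 of some slice of computing
kg_absorbed_per_tree_year = 10   # commonly cited rough figure for a young tree

trees_needed = co2_tonnes_per_year * 1000 / kg_absorbed_per_tree_year
print(f"{trees_needed:,.0f} trees")  # ~30 billion trees, on these assumptions
```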

@alcinnz I think that looking at specific things that could be improved is exactly the right approach. Otherwise you're in the land of unfalsifiable claims, because we don't actually know what would happen if we just shut off the Internet or any given service.

@freakazoid Absolutely. Efficient computing ultimately comes down to the fuzzy field of usecases.

That's *one* reason I want people to be funding quality work directly, rather than fund clickbait via surveillance advertising.

@freakazoid I'm all for taking every measure we can!

And I'm glad server farms are so efficient, but that won't stop me from discouraging their use. It will however encourage me to recommend Microsoft's or Google's clouds (or my local Catalyst Cloud) over Amazon's.

@alcinnz Well I don't know anything about AWS's efficiency. I only know anything about AWS from a user standpoint.

Don't get me wrong; the centralization makes me really uncomfortable, and I'm happy to spend energy/money/etc to move more control back into people's hands.

@freakazoid According to Greenpeace's analysis, only MAGAF's datacenters are that green, and amongst them Amazon lags behind.

@alcinnz Ah, ok. I probably saw that. I remember being quite floored when Greenpeace started saying positive things about Google and Facebook. But it also gave me a lot more respect for them, since they were willing to actually say when a company had made substantial improvements.

There were certainly plenty of cynics at both Facebook and Google, but most of us really believed in actually making the datacenters green.

@alcinnz (The main things that made me lose respect for Greenpeace were their opposition to nuclear power, which in my view was necessary for going carbon-free (I have since changed my mind, though they were still wrong before the economics of solar really changed), and the dishonesty of their "Kleercut" campaign against Kimberly-Clark, which talked about them clearcutting "in old-growth forests" when they were only cutting trees they'd planted themselves.)

@alcinnz On the blockchain thing, my friends and I have been wishing for SOMETHING like that since at least '00, and not specifically for cryptocurrency (I could not conceive of anything other than Chaumian e-cash or something like Ripple, which is why I wrote Bitcoin's very first obituary back in 2010). At the time, Spread seemed the most interesting. Stellar's consensus protocol and Avalanche both seem pretty similar to Spread.

spread.org/

@alcinnz IPFS seems closest to reaching critical mass to me, and its implementation also seems the most principled. I can't imagine its association with FileCoin hasn't contributed significantly to the amount of attention it's getting. And FileCoin is an attempt to solve the biggest problem with p2p storage, which is that you have to waaay overreplicate when even large nodes can drop off at any time and there's no guarantee it's not the same organization operating 10k nodes.
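
A toy illustration of that over-replication problem, assuming independent node failures (real systems also have to worry about correlated ones, like 10k nodes run by one organization):

```python
# If each node is only up with probability p, how many replicas k do we
# need so that at least one copy is reachable 99.99% of the time?
def availability(p, k):
    return 1 - (1 - p) ** k  # chance at least one of k replicas is online

for p in (0.99, 0.9, 0.5):   # datacenter node vs good peer vs flaky peer
    k = 1
    while availability(p, k) < 0.9999:
        k += 1
    print(f"node uptime {p}: need {k} replicas")  # -> 2, 4, and 14 replicas
```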

@freakazoid @alcinnz

I don't understand why friends can't just help each other host for free or in a pool. we do it on feddy, why does it suddenly need to be monetized? fuck that shit man

@freakazoid
Definitely agree on the redistributive carbon tax.

I was also wondering whether, in the meantime, you found some comparison between datacenters and decentralized alternatives.
I've been wondering whether the issue with those efficient hyperscalers is not (besides noise) the fact that, wanting to provide 99.99% uptime, they are ultra redundant (2 to 3 times the data/machines) and hardware gets replaced very regularly.
Plus a small Pi at home needs no air conditioning.
@alcinnz

@silmathoron @alcinnz From a power consumption standpoint, compute dominates, and Intel's server CPUs are extremely energy efficient. Google's datacenters have 11% overhead for cooling, and Facebook's are lower because they use an open plenum, though all do use a significant amount of water. They usually site them in places that have plenty of water, but not always.

Another factor is that a significant amount of the workload (likely a majority at this point) has moved to GPUs and TPUs, which are far more efficient than CPUs for the right workloads.

There's also the ability to "bin pack" and run batch workloads during off peak times and turn off servers that aren't in use, though I don't know how much that matters. Also tiered storage, with lower redundancy for colder data, and they can turn off hard drives selectively. Facebook was doing this; I don't know if Google does the same.

Power is a huge fraction of these companies' opex, so they have a big incentive to minimize consumption. They are also sensitive to public perception, and they have a lot of employees who care about that sort of thing, so they've gone out of their way to try to be as green as they can. I say "be" and not just "appear" because they are under pretty close scrutiny by Greenpeace and the media and their own idealistic employees, so they can't get away with much greenwashing.

That being said, I think decentralization is still preferable simply because of the control issue, even if it uses far more energy.
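
A toy sketch of the tiered-storage idea from above; tier fractions and redundancy factors are made up for illustration:

```python
# Colder data gets lower redundancy, so the same logical bytes need far
# less physical disk (and power) than naive full replication.
tiers = {
    # name: (fraction of data, physical copies per logical byte)
    "hot":  (0.05, 3.0),   # fully replicated, always spinning
    "warm": (0.25, 1.5),   # erasure-coded
    "cold": (0.70, 1.2),   # erasure-coded, drives spun down when idle
}

overhead = sum(frac * copies for frac, copies in tiers.values())
print(f"{overhead:.2f}x physical storage vs 3.0x if everything were hot")
```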

@alcinnz "running bloated software & webpages" is a huge contributer to climate change, wish people would realize that.

Someone close to me works for AT&T, and they realized they would run out of energy to supply their growing server farms, so they put research and dev into fixing it, and they did, for now. We need a financial incentive to solve this on a large scale; a tax on shitty software perhaps (like a carbon tax? lol)

@dcharles525 @alcinnz If, as you say, a lot of energy use is being caused by shitty software, then a carbon tax IS a tax on shitty software.

@alcinnz Can you please help me understand how syncing everything to the cloud is environmentally destructive? I see the big cloud players all taking big steps to minimize their environmental impact, and given that, isn't 1000 racks of storage in a data center backing 100000 people more efficient than spinning up 100000 magnetic platters on home electricity?

@alcinnz Mind you, I'm not saying syncing everything to the cloud is an unmitigated good *AT ALL*. Recognizing what's important and taking full possession of critical bits is the only way to go, but for many people who won't realistically back up ever, at all, having a cloud drive for important docs seems prudent.
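
One hedged back-of-envelope for that question; every number here is an assumption, and the honest answer depends on utilization, duty cycle, and datacenter overhead (PUE):

```python
# Shared datacenter storage vs everyone spinning their own platter at home.
home_drive_w = 6               # assumed power of one home drive, always on
people = 100_000

shared_drive_w = 8             # assumed per-drive power in a datacenter
people_per_shared_drive = 100  # assumed deduplicated, thin-provisioned sharing
pue = 1.1                      # datacenter overhead (cooling etc.)

home_total = home_drive_w * people
dc_total = shared_drive_w * (people / people_per_shared_drive) * pue
print(f"home: {home_total/1000:.0f} kW vs datacenter: {dc_total/1000:.1f} kW")
# On these assumptions the shared racks win easily; the catch is whether
# those home drives would really be spinning 24/7 in the first place.
```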

@feoh Most people and organizations have a whole lot of useless information sitting around on their harddrives. Most data is junk that never actually gets used. My understanding (though not the book's) is that that's not wasteful unless you actually do something with said data, especially on modern filesystems, or buy new harddrives because of it.

So yeah, if you are backing up files, take some time to delete unwanted ones so they don't use up precious bandwidth or processing time.
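
In that spirit, a small helper along those lines: list the biggest files untouched for over a year, as candidates to prune before backing up (a sketch; adjust paths and thresholds to taste):

```python
import os, time

def stale_files(root, days=365, top=20):
    """Return the `top` largest files under `root` unmodified for `days`."""
    cutoff = time.time() - days * 86400
    found = []
    for dirpath, _, names in os.walk(root):
        for name in names:
            path = os.path.join(dirpath, name)
            try:
                st = os.stat(path)
            except OSError:
                continue  # skip broken symlinks, permission errors
            if st.st_mtime < cutoff:
                found.append((st.st_size, path))
    return sorted(found, reverse=True)[:top]

for size, path in stale_files(os.path.expanduser("~")):
    print(f"{size / 1e6:9.1f} MB  {path}")
```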

@alcinnz Also there is this thing where the companies that make smartphones are making them more difficult to repair. And also making their lifetimes shorter.

@alcinnz Go suckless, save the planet :D

Jokes aside, this problem really doesn't have to exist. Back in the day, entire university campuses ran off a single mainframe with dumb terminals.

Even nice GUIs like ours were implemented back in early Windows, for example. This is a problem we developers created by making software slower because "the computer is fast enough".

@alcinnz
I'm trying to do none of what you're listing. Everything on my computer is suckless (dwm, st for terminal, I edit things with nvim, etc.), which greatly reduces its resource usage. I use devices until they unrecoverably break. My previous laptop was used for about a decade (although I didn't get much out of it because I was its fourth user).

I'd love it though if being an old-school UNIX hacker wasn't a prerequisite for caring about the issue.

(If you have any info on how to improve...

@alcinnz Oof, now I realise what an old toot I replied to. Sorry about that.

@almaember It's alright, I have it pinned to my profile so this toot periodically gets recirculated.

Yeah, that's the big question! I wouldn't say you have to be an old-school UNIX hacker; there are simple enough graphical desktops, some of which are also quite gorgeous! Though it all depends on where you draw the line between necessary complexity and bloat...

That said, I'd definitely say the web's the biggest battleground to tackle today...

@alcinnz What I mean is that there's a certain subgroup of developers (mostly those who are old-school anything, probably even the three DOS fans) who care about the issue.

I won't question that you don't have to be one to help, but most other people simply don't care (respect to the exceptions).

Regarding the web, I think we either go back to web1.0 or throw it out in favour of Gemini.

(I'm invested in this issue since I'm actually writing a somewhat-working web-forum replacement myself.)

@almaember Agreed, at least mostly! I'm building my own JS-free browser engines. And I threw in Gemini support because might-as-well.

Though I do support (most of) CSS because I find it useful internally and it's actually quite straightforward. I am taking (trivial) anti-abuse measures.

And yeah, I know first-hand how little most care about making efficient software! Very difficult to get funding for implementing a more efficient geospatial datamodel...

@alcinnz I know Plan 9 has gained some recent fans. Personally, I was fascinated by it about 20 years ago. At the time, the web pages you found for it were a bit different.

One of the things that struck me then, and I remember still, is how the 9P protocol - the one that exposes everything as a file, far more than UNIX ever did - was *also* intended for what we now consider cloud syncing, except quite different.

I hope you don't mind me going off a little on that tangent.

@alcinnz Seems I discovered this thread only now through a boost.

ANYWAY.

9P works a little like an improved NFS, really - it maps to fairly basic file-system operations such as opening directories and files, seeking, reading, closing. It's not all that magical in and of itself.

But because Plan 9 is a lot more aggressive in treating everything as a file, device drivers or similar are rather indistinguishable from file servers... they *are* file servers in their API to the rest of...

@alcinnz ... user space, but what they do internally, well, that's up to them. So you have e.g. a file server that creates a file that you can read your mouse pointer coordinates from, that kind of thing.

But because the protocol also works remotely, you can also read another machine's mouse pointer coordinates if it exposes this file to you, etc, etc. It's files all the way down.

Anyhow, when it comes to these remote files, there used to be something mentioned in the docs, namely that...

@alcinnz ... the user shouldn't care (in the API usage, at least) if the file they're trying to read is on a local disk, a remote disk, or perhaps some tape archive attached to a remote archive server.

The point is, *this* is the way to sync files with relative efficiency. Write local first in your software. Don't worry about anything else.

Then have some kind of configuration determine whether files are (selectively?) archived on your NAS or whatever. And then...

@alcinnz ... have your NAS config determine if there should be a longer term, offsite backup as well.

The user/application shouldn't know about this. There should be no special code paths for it.

At the same time, local first makes it such that it balances efficiency and other concerns such as recoverability best.

TL;DR cloud syncing was rubbish, and we've had better concepts for 30 years now.
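
A toy sketch of that local-first shape: the application only ever writes a plain local file, and whatever syncs it afterwards (rsync, Syncthing, a 9P mount...) is external policy, not application code:

```python
import os

def save(path, data):
    # The only thing the application does: write locally, atomically.
    os.makedirs(os.path.dirname(path), exist_ok=True)
    tmp = path + ".tmp"
    with open(tmp, "w") as f:
        f.write(data)
    os.replace(tmp, path)  # atomic on POSIX; no cloud SDK, no special path

save(os.path.expanduser("~/notes/today.txt"), "write local first\n")
# A separate daemon then archives this to a NAS or offsite backup according
# to the user's configuration; the app never knows or cares.
```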

@jens Hmmmm, I see echoes of these "network filesystems" in Linux...

The problem here is you can either choose the old, unpopular network filesystems Linux provides, or use GVFS in userspace. Though not all apps (LibreOffice, for example) support GVFS, so it's effectively very buggy...

@alcinnz None of these synchronize, however; they just provide real-time access. The Plan 9 docs (sort of) described syncing.

@jens @alcinnz

Wasn't it possible to basically "share the CPU" with other hosts?

@selea @alcinnz I don't recall. But at the time, I was also interested in OpenMOSIX, so I'm a little confused on that front.

@alcinnz With business as usual, computing is an unmitigated disaster. I've written a piece on this (it is the current focus of my research):

wimvanderbauwhede.github.io/ar
