
If you host a website only for yourself, please consider "Referrer-Policy: no-referrer" (or "same-origin").

Because you are the only one using the site, it is very easy to correlate requests to you personally (of course, this could also be done with IP addresses and DNS PTR records, but don't make it too easy, ok?).
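For a single-user site this is one header; a minimal sketch for nginx (the Apache `Header` directive is analogous):

```nginx
# Don't send a Referer header anywhere when visitors follow links off this site.
add_header Referrer-Policy "no-referrer" always;

# Alternative: keep full referrers for same-origin navigation only.
# add_header Referrer-Policy "same-origin" always;
```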

(Website=anything, including your own masto instance).

I'm looking at various #haskell libraries for 2D game development.

Now my little grid can resize, move, and give the item renderer the ability to react to its mode.
#reactnative #reanimated

Friendly reminder that Command Line Heroes is a thing, it's absolutely spectacular, and is hosted by the one and only @saronyitbarek.

#CommandLineHeroes #RedHat

redhat.com/en/command-line-her

If you need to jot down a quick note in your web browser, put in this URL

data:text/html,<html contenteditable>

and it just opens a window with nothing in it that you can type text into!

(courtesy of metafilter)

metafilter.com/189450/Worlds-S

They're focused on "underspecification," a well-known statistical phenomenon that has not been at the center of machine learning analysis (until now).

It's a gnarly concept, and I quickly found myself lost while reading the original paper; thankfully, Will Douglas Heaven did a great breakdown for MIT Tech Review.

www-technologyreview-com.cdn.a

2/

Update: Linux 5.9.9 is now working fine in all of my devices. I thus consider it a stable kernel and think it should be fine to use :blobcatuwu:

@profoundlynerdy @SuperFloppies @alcinnz
Tokenization is splitting the input into its semantic pieces, while lexing is labeling those pieces according to the role each token plays in the language. In a context-free grammar, those could be separate passes, but ain't nobody got time for that now
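A minimal two-pass sketch of that distinction (hypothetical `split`/`label` helpers, not from any real lexer):

```haskell
import Data.Char (isAlpha, isDigit, isSpace)

-- Pass 1 (tokenization): split the input into its pieces.
split :: String -> [String]
split [] = []
split (c:cs)
  | isSpace c = split cs                                       -- drop whitespace
  | isAlpha c = let (w, rest) = span isAlpha (c:cs) in w : split rest
  | isDigit c = let (n, rest) = span isDigit (c:cs) in n : split rest
  | otherwise = [c] : split cs                                 -- punctuation, one char each

-- Pass 2 (lexing): label each piece with its role in the language.
data Token = Ident String | Number String | Punct String
  deriving (Show, Eq)

label :: String -> Token
label s@(c:_)
  | isAlpha c = Ident s
  | isDigit c = Number s
label s = Punct s

-- map label (split "foo = 42;") == [Ident "foo",Punct "=",Number "42",Punct ";"]
```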

"How We Started a Web Analytics Alternative to Google Analytics"

thanks to Founder Facts for featuring this interview with me about Plausible!

founderfax.com/how-we-started-

Whenever it finishes lexing any of those cases, it'll yield a value of its Token type for me or others to pattern match upon. Other characters are labelled Delim, and this lazy token list ends whenever the input text does. UTF-16 decoding is largely skipped as irrelevant to selecting branches.

Another function iterates over tokens, determines whether they need a comment to separate them, and converts them back to properly escaped text. Numbers store their textual representation for performance.
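A hypothetical sketch of such a Token type and comment-inserting serializer (all names invented; the real lexer's types surely differ):

```haskell
-- Invented token type in the spirit described above.
data Token = Ident String
           | Num Double String   -- parsed value plus original text
           | Delim Char
  deriving (Show, Eq)

-- Convert a token back to text, reusing any stored text.
unlex :: Token -> String
unlex (Ident s) = s
unlex (Num _ s) = s
unlex (Delim c) = [c]

-- Two adjacent tokens that would merge back into one token
-- need an empty comment between them when serialized.
needsComment :: Token -> Token -> Bool
needsComment (Ident _) (Ident _) = True
needsComment (Ident _) (Num _ _) = True
needsComment (Num _ _) (Num _ _) = True
needsComment (Num _ _) (Ident _) = True   -- "2" then "em" would merge
needsComment _ _ = False

serialize :: [Token] -> String
serialize []  = ""
serialize [t] = unlex t
serialize (a:b:ts)
  | needsComment a b = unlex a ++ "/**/" ++ serialize (b:ts)
  | otherwise        = unlex a ++ serialize (b:ts)

-- serialize [Ident "margin", Delim ':', Num 2 "2", Ident "em"] == "margin:2/**/em"
```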

6/6fin


The lexed CSS identifiers are written into the preallocated arena with any escape codes resolved. If an identifier is followed by an open paren "(" it'll parse to a different token. If that name was "url" (in either case) it'll check whether its argument is unquoted, indicating it needs special casing.

Strings also need this escape parsing, and are terminated with the quote they started with.
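String lexing as described might look like this simplified sketch (hypothetical helper; hex escape resolution is elided, escaped characters are kept verbatim):

```haskell
-- Lex a string terminated by the same quote it started with.
-- Returns the string's contents and the remaining input.
lexString :: String -> Maybe (String, String)
lexString (q:rest) | q == '"' || q == '\'' = go rest
  where
    go []          = Nothing                                   -- unterminated string
    go ('\\':c:s)  = fmap (\(body, r) -> (c:body, r)) (go s)   -- keep escaped char as-is
    go (c:s)
      | c == q     = Just ("", s)                              -- matching close quote
      | otherwise  = fmap (\(body, r) -> (c:body, r)) (go s)
lexString _ = Nothing

-- lexString "'a\\'b'x" == Just ("a'b", "x")
```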

Names may also be prefixed with @ or #; the latter is more lenient, which it'll flag.

5/6?


Lexing a number involves looking for an optional sign character, followed by decimal digits with an optional "." in the middle, followed by an optional lower- or uppercase E and more digits. It'll parse the integer component in blocks of 40 characters, putting the rest off until lazily evaluated, and use the `scientific` module to parse floating point/exponents.
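A simplified sketch of that grammar (hypothetical helper; the real lexer's chunked lazy parsing and its use of the `scientific` package are elided, with `read` standing in):

```haskell
import Data.Char (isDigit)

-- Optional sign, digits with an optional "." in the middle, optional
-- e/E exponent. Returns the value and the remaining input.
lexNumber :: String -> Maybe (Double, String)
lexNumber input =
  let (sign, s0) = case input of
        (c:r) | c `elem` "+-" -> ([c], r)
        _                     -> ("", input)
      (int, s1)  = span isDigit s0
      (frac, s2) = case s1 of
        ('.':r) | any isDigit (take 1 r) ->          -- "." only with a digit after it
          let (ds, r') = span isDigit r in ('.':ds, r')
        _ -> ("", s1)
      (expo, s3) = case s2 of
        (e:r) | e `elem` "eE" ->
          let (es, r1) = case r of
                (c:r') | c `elem` "+-" -> ([c], r')
                _                      -> ("", r)
              (ed, r2) = span isDigit r1
          in if null ed then ("", s2) else (e : es ++ ed, r2)
        _ -> ("", s2)
      intPart = if null int then "0" else int        -- `read` rejects ".5"
  in if null int && null frac
       then Nothing
       else Just (read (dropPlus sign ++ intPart ++ frac ++ expo), s3)
  where
    dropPlus ('+':t) = t                             -- `read` also rejects a leading '+'
    dropPlus t       = t

-- lexNumber "-12.5e3px" == Just (-12500.0, "px")
```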

The tricky part of CSS identifier (beyond what can start or continue one) is escape codes: hex, suppress newline, or ignore rules.
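Those escape forms might be resolved with something like this simplified sketch (hypothetical helper; real CSS caps hex escapes at six digits and consumes one optional whitespace character afterwards, which this only approximates):

```haskell
import Data.Char (isHexDigit, chr)
import Numeric (readHex)

-- Resolve backslash escapes: hex codepoints, suppressed newlines,
-- or a literal next character.
unescape :: String -> String
unescape [] = []
unescape ('\\':cs) = case span isHexDigit (take 6 cs) of
  ([], _)  -> case cs of
                ('\n':rest) -> unescape rest         -- suppressed newline
                (c:rest)    -> c : unescape rest     -- literal character
                []          -> []                    -- trailing backslash, dropped
  (hex, _) -> let rest  = drop (length hex) cs
                  rest' = case rest of (' ':r) -> r  -- one optional space consumed
                                       _       -> rest
              in chr (fst (head (readHex hex))) : unescape rest'
unescape (c:cs) = c : unescape cs

-- unescape "\\41 bc" == "Abc"
```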

4/6?


Got some mint condition in-box HDD docking modules for the junk server, but it still doesn't read HDDs. The HDD is fine, the cables are fine. Starting to suspect motherboard issues. Maybe hardware failure, or just plain incompatibility?
I can see the HDD (or at least a populated IDE slot) in BIOS, but it doesn't show up during boot select, not even in Plop.
Installed the HDD directly for now, gonna set up a distro on it and try to debug things from there.

Okay, I'll start off with what I know which is javascript and node, and then port to C once the design is roughly how I want it.

typesetter font.tsv boxes.tsv > typeset.tsv

where font.tsv would contain font metrics, boxes would be rects of x y width height

and the output

glyph-index x y width height

which could be easily transformed to any sort of format you want from there by other means: SVG, PDF, PS, HTML spans, whatever you want. It's just data in a flexible format.
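Purely as a made-up illustration of that data flow (every number here is invented), the two files might look like:

```
# boxes.tsv: one rect per line, "x	y	width	height"
0	0	200	40
0	48	200	120

# typeset.tsv output: "glyph-index	x	y	width	height"
42	2	4	9	12
17	12	4	7	12
```

(columns tab-separated, matching the .tsv extension)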

Various tokens are absolutely trivial to lex, like:

* <!-- & -->, for very legacy browsers that otherwise can't cope with CSS.
* Any of ",:;()[]{}"
* $=, *=, ^=, ~=, |=, & || (which I don't support).

Lexing whitespace (which is *only* significant in CSS selectors, making it a bit of a nuisance to deal with) & comments is almost as trivial.

It attempts to lex numbers (which may be followed by a % or a name for a "unit") before names in case of ambiguity.
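The "trivial" cases above amount to fixed-string pattern matches; a hypothetical sketch (invented constructors, not the real Token type):

```haskell
data Token = CDO | CDC               -- "<!--" and "-->"
           | Match String            -- attribute-match operators, "||"
           | Punct Char
  deriving (Show, Eq)

lexTrivial :: String -> Maybe (Token, String)
lexTrivial s = case s of
  ('<':'!':'-':'-':r)          -> Just (CDO, r)
  ('-':'-':'>':r)              -> Just (CDC, r)
  ('|':'|':r)                  -> Just (Match "||", r)   -- before the "|=" case
  (c:'=':r) | c `elem` "$*^~|" -> Just (Match [c, '='], r)
  (c:r) | c `elem` ",:;()[]{}" -> Just (Punct c, r)
  _                            -> Nothing

-- lexTrivial "~=y" == Just (Match "~=", "y")
```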

3/5?


@alcinnz Comments are also not always trivial, because there may be differences in interpreting opening and closing brackets: e.g. C ignores /* within a comment, while OCaml and Common Lisp require the brackets to match.
In, say, (* A (* B *) C *) everything is a comment.
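That matching can be sketched with a depth counter (hypothetical helper, not from any real lexer):

```haskell
-- OCaml-style nested comments: "(*" increments the depth, "*)"
-- decrements it, and the comment only ends at depth zero.
-- Returns the input remaining after the comment.
skipComment :: String -> Maybe String
skipComment ('(':'*':rest) = go 1 rest
  where
    go :: Int -> String -> Maybe String
    go 0 s           = Just s
    go _ []          = Nothing          -- unterminated comment
    go n ('(':'*':s) = go (n + 1) s
    go n ('*':')':s) = go (n - 1) s
    go n (_:s)       = go n s
skipComment _ = Nothing

-- skipComment "(* A (* B *) C *)rest" == Just "rest"
```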

@alcinnz And for indentation-aware languages, do you recognize the leading space of a line as a token?
Not the space itself, but the difference from that of the previous line, because there is a countably infinite number of possible indentations, but only a finite number of token types.
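A sketch of that difference-as-token idea (hypothetical Indent/Dedent constructors; a real offside-rule lexer keeps a stack of indentation levels so one line can close several blocks at once):

```haskell
data IndentTok = Indent | Dedent | Line String
  deriving (Show, Eq)

-- Compare each line's leading spaces to the previous line's, and
-- emit the *difference* as a token rather than the spaces themselves.
offside :: [String] -> [IndentTok]
offside = go 0
  where
    go _ [] = []
    go prev (l:ls) =
      let cur  = length (takeWhile (== ' ') l)
          body = Line (drop cur l)
      in case compare cur prev of
           GT -> Indent : body : go cur ls
           LT -> Dedent : body : go cur ls
           EQ ->          body : go cur ls

-- offside ["a", "  b", "c"] == [Line "a", Indent, Line "b", Dedent, Line "c"]
```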

FLOSS.social

For people who care about, support, or build Free, Libre, and Open Source Software (FLOSS).