🔍 There’s a new HTML tag called `<search>` that represents search semantics. This is good because it’s something that an ARIA landmark role exists for (`role="search"`), but that today can only be expressed with ARIA. A dedicated element allows authors to follow the “don't use ARIA if you can avoid it” rule.
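A minimal sketch of the difference (form contents and `action` are illustrative):

```html
<!-- Today: the search landmark can only be expressed with ARIA -->
<form role="search" action="/search">
  <input name="q" type="search">
</form>

<!-- With the new element, no ARIA required -->
<search>
  <form action="/search">
    <input name="q" type="search">
  </form>
</search>
```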

This post, especially the first half, captures some of what I feel about living at such an exciting time. The pace is just *extraordinary*. Remember literally yesterday, when GPT-4 was just a textbox with no access to the external world?

I've spent the last three days blocking every account that promotes a Tweet to me. This used to work. But now they keep coming, and I'm declaring defeat.

By which I mean, I'm switching to the Firefox-with-uBlock Origin PWA version of Twitter.

With more prompting, you can get it expanded: Not yet a scene-by-scene recreation... Luke's arc is too far condensed. I'm sure I could tease more out of it, but, time to sleep, I think.

ChatGPT-4 can't stop itself, and wanted to do the whole play. Some departures from canon, but arguably justified for abbreviation? Luke has some great lines, e.g.:

> Can I, a simple farmboy, bear this weight?
> Yet if the cause be just, I shall not shrink

Not quite true to canon, nor a poem, but I enjoyed it all the same. The Shakespearean turns of phrase are well done.

The moment early in a project's lifecycle when a team lead is on Twitter, responding to feedback and giving internal insights, is magical and rare. Huge kudos to @MParakhin for doing that from within Microsoft, for a project as large as Bing. His "Tweets & Replies" TL is gold.

So many people scrolling Instagram in this club... I, the superior human, am scrolling Twitter.

Thanks for reading! Please share this with the AI-curious laypeople in your life, and send me any feedback (especially on how to make it more accessible to them).

But the best part of having the simulators analogy handy is that it prevents you from getting stuck in the contentless framing wherein LLMs are "just" text predictors.

The question of how intelligent simulacra like ChatGPT can become is not at all settled, and we shouldn't expect there to be fundamental limits. (But there may be practical ones.)

Large language models are simulators, and the different behaviors we see exhibited by ChatGPT and friends can be explained by how simulacra are instantiated and evolve within them.

ChatGPT Is Not a Blurry JPEG of the Web. It's a Simulacrum.

In which I try to provide a more accurate analogy for large language models, by summarizing @repligate's simulators thesis.

(I still think it dropped a few words from what I would consider a literal translation? In particular I would translate the last clause more like "Let's *try* to practice "communicating" to someone *properly*". But either way, this was very helpful for getting me unstuck.)

The extra layer of control you get via natural-language prompting can be quite useful, compared to traditional ML systems:

It seems New Relic will soon stop using an `unload` event listener, which prevented pages from benefiting from the ultra-fast back/forward cache.

They shipped an experimental setting last month:

And they're about to make it the default:

Yay for faster websites!
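A sketch of why this matters: any `unload` listener makes browsers skip the back/forward cache (bfcache) entirely, forcing a full reload on every back navigation, while `pagehide` fires in the same situations but keeps the page bfcache-eligible. (The `flushAnalytics` helper is hypothetical, and an `EventTarget` stands in for a real page's `window` so the sketch runs anywhere.)

```javascript
// Hypothetical analytics helper, for illustration only.
let lastPersisted = null;
function flushAnalytics(persisted) {
  lastPersisted = persisted;
}

// Stand-in for `window`, so this sketch also runs outside a browser.
const page = new EventTarget();

// Prefer "pagehide" over "unload": registering an "unload" listener
// disables the bfcache, whereas "pagehide" leaves the page eligible
// for instant back/forward restores.
page.addEventListener("pagehide", (event) => {
  // In a browser, event.persisted is true when the page may later be
  // restored from the bfcache.
  flushAnalytics(Boolean(event.persisted));
});

page.dispatchEvent(new Event("pagehide"));
```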

I've been trying to get Chrome release notes / articles / etc. to avoid this mistake, but it's an uphill battle. (Similarly for MDN.) The WebKit team shows us it can be done!!

The new Safari beta release is great and all, but what I *really* want to congratulate the WebKit team on is their excellent tech writing, and in particular how they avoid the incorrect `ClassName.method` notation for instance methods.
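The mistake is easy to demonstrate: instance methods live on the prototype, not on the constructor, so writing e.g. `Array.includes` for the instance method names a property that doesn't exist:

```javascript
// Instance methods hang off the prototype, not the constructor:
console.log(typeof Array.includes);           // "undefined" — no such static method
console.log(typeof Array.prototype.includes); // "function"
console.log([1, 2, 3].includes(2));           // true

// Static methods, by contrast, really do live on the constructor:
console.log(typeof Array.from); // "function"
```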
