🔍 There’s a new HTML tag called `<search>` that represents search semantics. This is good because an ARIA landmark role (`role="search"`) already exists for this, but until now it could only be expressed with ARIA. A dedicated element lets authors follow the “don't use ARIA if you can avoid it” rule.
Spec: https://html.spec.whatwg.org/multipage/grouping-content.html#the-search-element
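A minimal sketch of the before/after (form contents are illustrative):

```html
<!-- Before: the search landmark could only be expressed with ARIA -->
<form role="search" action="/search">
  <label for="q">Search</label>
  <input type="search" id="q" name="q">
  <button>Go</button>
</form>

<!-- After: the dedicated element carries the landmark semantics itself,
     so no ARIA attribute is needed -->
<search>
  <form action="/search">
    <label for="q">Search</label>
    <input type="search" id="q" name="q">
    <button>Go</button>
  </form>
</search>
```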
This post, especially the first half, captures some of what I feel about living at such an exciting time. The pace is just *extraordinary*. Remember literally yesterday, when GPT-4 was just a textbox with no access to the external world? https://about.sourcegraph.com/blog/cheating-is-all-you-need
With more prompting, you can get it expanded: https://sharegpt.com/c/T4abi1w#4 Not yet a scene-by-scene recreation... Luke's arc is too far condensed. I'm sure I could tease more out of it, but, time to sleep, I think.
ChatGPT-4 couldn't stop itself and wanted to do the whole play. Some departures from canon, but arguably justified for abbreviation? https://sharegpt.com/c/gbADTfO Luke has some great lines, e.g.:
> Can I, a simple farmboy, bear this weight?
> Yet if the cause be just, I shall not shrink
Thanks for reading! Please share this with the AI-curious laypeople in your life, and send me any feedback (especially on how to make it more accessible to them).
But the best part of having the simulators analogy handy is that it prevents you from getting stuck in the contentless framing wherein LLMs are "just" text predictors.
The question of how intelligent simulacra like ChatGPT can become is not at all settled, and we shouldn't expect there to be fundamental limits. (But there may be practical ones.)
Large language models are simulators, and the different behaviors we see exhibited by ChatGPT and friends can be explained by how simulacra are instantiated and evolve within them.
ChatGPT Is Not a Blurry JPEG of the Web. It's a Simulacrum. https://blog.domenic.me/chatgpt-simulacrum/
In which I try to provide a more accurate analogy for large language models, by summarizing @repligate's simulators thesis.
(I still think it dropped a few words from what I would consider a literal translation? In particular I would translate the last clause more like "Let's *try* to practice "communicating" to someone *properly*". But either way, this was very helpful for getting me unstuck.)
It seems New Relic will soon stop using an unload event listener. This prevented pages from benefiting from the ultra-fast back/forward cache.
They shipped an experimental setting last month: https://docs.newrelic.com/docs/release-notes/new-relic-browser-release-notes/browser-agent-release-notes/browser-agent-v1222/
And they're about to make it the default: https://github.com/newrelic/newrelic-browser-agent/pull/401
Yay for faster websites!
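For context, the pattern at issue: an `unload` listener disqualifies a page from the back/forward cache in most browsers, while a `pagehide` listener (checking `event.persisted`) does not. A hedged sketch of the swap — the function and callback names here are mine, not New Relic's:

```javascript
// Sketch: replace an unload listener with pagehide to stay bfcache-eligible.
// An unload listener like this would block the bfcache:
//   window.addEventListener('unload', sendFinalBeacon);

// Illustrative helper so the wiring is visible (names are hypothetical).
function installPagehideHandler(target, flush) {
  target.addEventListener('pagehide', (event) => {
    // event.persisted === true means the page is entering the bfcache:
    // it may be restored later, so don't treat this as a final teardown.
    if (!event.persisted) {
      flush(); // page is really going away; send final analytics now
    }
  });
}

// In a browser, attach to window (guarded so the snippet loads elsewhere too).
if (typeof window !== 'undefined') {
  installPagehideHandler(window, () => {
    // e.g. navigator.sendBeacon('/collect', payload); // illustrative endpoint
  });
}
```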
I've been trying to get Chrome release notes / articles / etc. to avoid this mistake, but it's an uphill battle. (Similarly for MDN.) The WebKit team shows us it can be done!!
This is some real Susan Calvin robot psychologist shit https://www.lesswrong.com/posts/aPeJE8bSo6rAFoLqg/solidgoldmagikarp-plus-prompt-generation