Alright I made it, I'm https://staging.bsky.app/profile/domenic.me . Will try to pay it forward once whatever mysterious invite-generating process happens.
I then asked a follow-up question about how I could make this work while also grouping the results. It (of course) knew what to do there too. But check out how the conversation ended! I thought I was just doing a polite "thank you", but I got a bonus lesson!
15 minutes reading docs on how "common table expressions" work, and I still can't figure out how or whether they'll solve my problem.
1 minute with ChatGPT, and it says "CTEs will work. Here's the modified code."
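For flavor, here's a minimal sketch of the kind of rewrite involved. This is not the actual code from the conversation, and the table and column names are invented; the point is just that a CTE (the `WITH` clause) names an intermediate result that a later `GROUP BY` can aggregate over:

```sql
-- Hypothetical example (Postgres-flavored); the real query isn't in the post.
WITH recent_orders AS (
  SELECT customer_id, total
  FROM orders
  WHERE created_at > NOW() - INTERVAL '30 days'
)
SELECT customer_id,
       COUNT(*)   AS order_count,
       SUM(total) AS spend
FROM recent_orders
GROUP BY customer_id;
```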
If you use unload handlers, then you should check out this potential change in Chrome.
Unload handlers are VERY unreliable. On desktop they also prevent the use of the bfcache in Chrome (and Firefox), while on mobile they don't. We want to align desktop with the mobile behavior.
https://groups.google.com/a/chromium.org/g/blink-dev/c/oU1yt5cdGH8
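If you rely on unload for end-of-session work, a minimal sketch of the usual replacement (the endpoint name here is made up): listen for `pagehide` instead, which fires on mobile too and doesn't block the bfcache.

```js
// Sketch, not code from the Chrome proposal. pagehide fires whenever the
// page is being hidden, including when it enters the bfcache.
window.addEventListener('pagehide', (event) => {
  // event.persisted is true when the page is going into the bfcache
  // rather than being fully discarded.
  // '/analytics/leave' is a hypothetical endpoint for illustration.
  navigator.sendBeacon('/analytics/leave', String(Date.now()));
});

// The pattern under discussion: a handler like this is what currently
// disables the bfcache on desktop Chrome and Firefox.
// window.addEventListener('unload', () => { /* ... */ });
```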
🔍 There’s a new HTML tag called `<search>` that represents search semantics. This is good because it’s something that an ARIA landmark role exists for (`role="search"`), but that today can only be expressed with ARIA. A dedicated element allows authors to follow the “don't use ARIA if you can avoid it” rule.
Spec: https://html.spec.whatwg.org/multipage/grouping-content.html#the-search-element
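A quick before/after sketch (the form markup is just illustrative, not from the spec):

```html
<!-- Before: the landmark can only be expressed with ARIA. -->
<div role="search">
  <form action="/search">
    <input type="search" name="q" aria-label="Search this site">
    <button>Search</button>
  </form>
</div>

<!-- After: the dedicated element carries the landmark semantics itself. -->
<search>
  <form action="/search">
    <input type="search" name="q" aria-label="Search this site">
    <button>Search</button>
  </form>
</search>
```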
This post, especially the first half, captures some of what I feel about living at such an exciting time. The pace is just *extraordinary*. Remember literally yesterday, when GPT-4 was just a textbox with no access to the external world? https://about.sourcegraph.com/blog/cheating-is-all-you-need
With more prompting, you can get it expanded: https://sharegpt.com/c/T4abi1w#4 Not yet a scene-by-scene recreation... Luke's arc is condensed too far. I'm sure I could tease more out of it, but, time to sleep, I think.
ChatGPT-4 couldn't stop itself, and wanted to do the whole play. Some departures from canon, but arguably justified for abbreviation? https://sharegpt.com/c/gbADTfO Luke has some great lines, e.g.:
> Can I, a simple farmboy, bear this weight?
> Yet if the cause be just, I shall not shrink
Thanks for reading! Please share this with the AI-curious laypeople in your life, and send me any feedback (especially on how to make it more accessible to them).
But the best part of having the simulators analogy handy is that it prevents you from getting stuck in the contentless framing wherein LLMs are "just" text predictors.
The question of how intelligent simulacra like ChatGPT can become is not at all settled, and we shouldn't expect there to be fundamental limits. (But there may be practical ones.)
Large language models are simulators, and the different behaviors we see exhibited by ChatGPT and friends can be explained by how simulacra are instantiated and evolve within them.
ChatGPT Is Not a Blurry JPEG of the Web. It's a Simulacrum. https://blog.domenic.me/chatgpt-simulacrum/
In which I try to provide a more accurate analogy for large language models, by summarizing @repligate's simulators thesis.