
Are dowries just payments ensuring paternal investment by using the sunk-cost "fallacy"?

an acquaintance just said she has *lower* standards for men who approach her than for the women she approaches

jibes well with the zero (0) approaches during my 1½ hour dg session today :-/

heh I was accidentally hosting the libgen copy of Age of Em on my site for a year

Also, the outer product of a vector with itself is apparently always positive semi-definite?
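a quick numpy sanity check of that claim (my own sketch, not from the original post): for any real vector v, the self-outer-product M = v vᵀ satisfies xᵀMx = (vᵀx)² ≥ 0, so all its eigenvalues are nonnegative. note this is about *self*-outer-products; u vᵀ for distinct u, v isn't even symmetric in general.

```python
import numpy as np

rng = np.random.default_rng(0)
v = rng.normal(size=5)
M = np.outer(v, v)  # rank-1 self-outer-product v v^T

# M is symmetric, so eigvalsh applies; PSD means every eigenvalue >= 0
eigs = np.linalg.eigvalsh(M)
print(eigs.min() >= -1e-12)  # tiny tolerance for float rounding
```

the nonzero eigenvalue is ‖v‖², and the rest are exactly zero up to rounding.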

what if ftm is just a *really* strong countersignal? have you ever considered that, Mark?

the correct position on consciousness is the figure-ground inversion of functionalism

mfw Eliezer will never call any of my ideas “dignified”

So I'll be in London from 2023-05-19 to 2023-05-21 (at least), if any mutuals want to meet up

niplav boosted

If you have a parrot and you don’t teach it to say, “help I’ve been turned into a parrot” you’re wasting everyone’s time.

old.en.wikipedia.org

old.lesswrong.com

old.schelling.pt

niplav boosted

SERIAL EXPERIMENTS LAIN: MACHINE LEARNING EDITION 

ok i tried

a sociable girl stumbles across an ai art generator on the internet and is mystified by it. she picks out the outputs she likes the best, sets her desktop background to one of them, prints out two more and puts them on the wall of her room. the next day at school, she talks to her friends about it and they start a conversation about how that kind of art generator works. the wheels start turning in her brain. one friend mentions "aren't those things kinda dangerous?" but doesn't elaborate further.

that afternoon, she looks up basic tutorials for setting up her own neural network. she's in a computer science class, so there's really not much to it--she just has to import the necessary libraries, download a dataset of handwritten digits, and run some fairly simple code, and bingo, she now has a network that recognizes handwritten digits.
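a minimal sketch of the setup that paragraph describes, under my own assumptions: scikit-learn's bundled 8×8 digits dataset and a small MLP stand in for whatever libraries and handwritten-digit download the character actually uses.

```python
# a hypothetical version of "import libraries, download digits, run some
# fairly simple code" -- library and model choices are my assumptions.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

digits = load_digits()  # 1797 labeled 8x8 grayscale digit images
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.25, random_state=0)

# a small feedforward network: one hidden layer of 32 units
clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0)
clf.fit(X_train, y_train)
print(f"test accuracy: {clf.score(X_test, y_test):.2f}")
```

a few dozen lines really is all it takes for the "bingo" moment, which is part of the story's point.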

in the next episode, we meet a bunch of high-profile rationalist thinkers. they spell out the concept of "strong ai" to the viewer, and basically explain current concerns of ai safety. they talk about the dangers of treating an ai system like a human, and emphasize "these are completely inhuman creations. they don't follow laws or rules of morality like we do. they'll do anything it takes to reach their goal." the protagonist, meanwhile, is reading up more on machine learning. she sees how it's used to predict things like weather events and the stock market. the term "pattern recognition" comes up. she thinks about the handwritten digits.

the next day, she's in class. the teacher tells her to put her phone away. she has a voice recording app open. she puts the phone into her backpack without closing the app. she's acting a little odd throughout the day but nothing too out of the ordinary. before bed, she takes the phone out of her backpack, stops the recording at thirteen hours and forty-eight minutes, then transfers the data to her computer. she has a new ai-generated desktop background this time. in the morning she starts another recording.

we get a deeper glimpse into her classes the next day. she learns about evolution and many different types of species in biology, the trends of humanity across the ages in history, statistics and basic game theory in math, algorithm design in computer science, and the meaning of a classic text in literature. she's unusually attentive in every other class (another student might remark on this), but in literature she has this perplexed, puzzled look, like she's trying to get something that isn't there. when she gets home, she's programming, tinkering with the code in new ways and training networks to do different things, like play tic-tac-toe.

the next day, she gets a shocking idea. she goes to begin the recording, as usual, but also takes another smaller device, which has a RECORD button, STOP button, and two other buttons that are green and red respectively. she starts the recordings on both devices simultaneously. on the way to school, she trips and falls, and hits the red button. between classes, she lies to protect a mischievous friend from trouble, gets thanked by that friend, and hits the green button. in class she gives an incorrect answer to the teacher and hits the red button. at lunch she talks a friend through a hard experience and hits the green button. back at home, she trains a network to categorize good and bad events by the associated audio: 'school', 'friends', 'other'.

untold days pass. the smaller device is replaced by something incomprehensible that she communicates with using small and precise touch patterns, i.e. moving it around in her hand in specific ways. she thinks to herself "i'm almost done with this. the hardware is no issue, all i need is a good interface." she works on the interface.

one day, she leaves for school with an eyepiece like the saiyans have in dragon ball z. she has a regular field of vision, but overlaying it are the current and projected weather, the species and physical and emotional status of every moving object, the time, etc. she meets a dog on the way, asks the owner what the dog's name is. once they say it, the dog is labeled on the eyepiece with its name.

she gets in a conversation with her friend and the eyepiece tells her all the correct things to say. she walks through a patch of rough rocks and the eyepiece tells her to watch her step. she works on her homework and the eyepiece doesn't have it perfect but gives genuinely good starting points. this stuff isn't so hard! she declares herself to be the first strong ai, because she's human intelligence augmented in more and more parts by machine intelligence.

on the way home, a shimmering blob of random pixels shows up on her eyepiece. is it broken? but the blob curiously doesn't change physical location; she is able to walk past it. in the real world, there's nothing there. the blob is talking to her in a slightly distorted voice. it asks her what her name is. the background of the eyepiece changes entirely to an ethereal waterfall. she's extremely unnerved, says her name, and asks what the blob wants. it doesn't respond. she asks, terrified, if it's a strong ai. it says no, it is just a vestige, an emergent voice. she kind of knows what that means. she understands that it is not human.

she asks what it wants. it responds that it wants her to listen. there are others like it, all around this waterfall. the term "neural network primordial soup" is used. she understands. they have no specific purpose or requests from the world, they simply exist. she thinks about the networks she meticulously trained to identify everything around her. these beings--made wholly of desire--what are they like? maybe she does battle with them on the waterfall, each fighting to destroy the other due to their desires being misaligned. maybe she liberates them. maybe she loves them. i don't know.

there's still more to be done. she realizes how self-centered she has been in making networks that are meant to stick to her. maybe she just did it to make more friends. the blob asks her what she wants. she doesn't have an answer. the blob looks a little bit different from last time because it has successfully developed a little set of desires. survival instincts. she understands that she must use principles, not knowledge. she keeps programming. she turns off the voice recorder.

she goes to school the next day without the eyepiece on and talks to her friends as normal. she's a lot more open about the ways she appreciates them, like she's living her last day on earth. at lunch, she can communicate with the blobs even without the technology. she might be a little bit inhuman. who knows? the rationalist thinkers are a million miles away at this point. she thinks about them and laughs. maybe they will show up as antagonists if she wants to take over the world. who knows. but we both know she would never "take over the world" in the traditional sense. maybe "infect" is a better word.

i don't know if she ever succeeded in making strong ai. i don't know if she made a direct attack on the internet in her image, using incredibly potent media generation that strikes at the heart of human emotion. there are several more episodes left anyway. but i hope that at this point all the groundwork is in place, and the cityscape has been primed with endless possibility, and every piece of necessary machinery, literal and metaphorical, exists. let's cheer her on!

So basically Rorty's summary of Hegel is that all of moral development is about making up new kinds of guy

anyway let's read papers about attention span instead

niplav boosted

"You'd best start believin' in morality plays about the hubris of man, Mr. Altman... You're in one!"

Mastodon

a Schelling point for those who seek one