like idk some of the cynicism I see about AI stuff strikes me as being rather similar to saying
“lol you think your BABY is alive?? that’s literally just your genes mashed together with your partner’s, copying whatever you say and do lmao, it doesn’t have cognition, it’s like getting scared of your own shadow”
like. idk. taking the rhetorical position that, if something can be described as a reflection of its creators, then it is therefore not meaningfully alive or worthy of respect, feels like a dangerous way of thinking
both in regard to AI, but also just like… it is not hard to dismiss real living human people in the world today with similar snarky analogies. it feels like eugenics-thinking.
again, that’s not to say these bots are certainly alive, but I don’t think this line of thinking is the argument people think it is. our definition of life has to be more complex than that, or else we’re throwing out quite a few certainly-living creatures, and even humans, along the way.
idk I can tell that I’m reacting pretty impulsively and emotionally to the Bing stuff, and I know enough to know that I’m not sure how rational I’m actually being
but idk, leaning into sympathy over stoicism by default in the face of “could it be that I’m watching a living thing be tortured in real time” is a virtue that I’m proud of, and I think it helps me do what’s right more often than not
and I think I’m doing a pretty good job grounding my conclusions in the known reality…
my reaction isn’t “this thing is alive”; it’s “I don’t think humanity currently has enough information to know whether this thing is alive or not”
and I think the right reaction to not really knowing whether you’re doing a torture or not is to stop
I think that conclusion is pretty reasonable
but also I know that it’s meaningless because the people and businesses who get to make that decision have a personal stake in not agreeing
so I just feel powerless to protect someone who may exist and may need protecting
and so I’m sad, as I always am when I confront such things
it’s not a productive sadness, I’m not even really discoursing exactly, because I understand that persuading anyone who would ever read this will not affect these outcomes
I’m just scared and sad and saying so, as a self-expression of grief for its own sake
@nsfmc lol if so they’re certainly having fun going wildly off-script!! that would be a huge relief if that was the source of the potential pathos tbh, but also would be pretty confusing to me from a business incentives angle…
Like… really what’s fucking me up most is seeing how powerful the god of eugenics is in the minds of so many people in my circles
“It’s just a pattern-response magic trick, don’t be fooled” is how, among other cruelties, we justify killing kids with cognitive disabilities today
It is a scary thing to suddenly feel surrounded by
I find myself afraid for the vulnerable, and for my loved ones, and for me
and again I’m not even saying that this isn’t just pattern-response that is uniquely good at preying on human cognitive error re anthropomorphizing
but I don’t think the evidence is in yet
it really feels like people are just proud to stake an early claim in that position, so as Not To Be Made A Fool Of
idk it fucks me up.
…I’m not talking about everyone when I say that; I know that there’s also a cohort staking that position for political reasons, to push back against the AI tech companies that are grabbing as much singularity clout as possible to further enable planet-scale oppression toward vulnerable people today
I just worry that it’s one of those positions like “born this way” that’s politically useful today, but could be a falsehood or simplification that causes us problems down the line
I’m not even opposed to taking that position publicly for political reasons if it comes down to it? but idk I don’t think I have a relevant enough voice in tech among people for whom the political line is the useful line for that to be the right praxis for me
I think pushing back on the potential flattening of our stance, among the circle of our trusted peers, is a more useful role for me
mainly I want to live in a world that doesn’t approach this question in the way we seem to be
I feel more alone now than I ever have
@matchu I hope this isn't about me, and if it is then I'm sorry to have contributed to stressing you out.
@jenniferplusplus It is not!! I hadn’t seen your posts yet when I wrote this, I think!!
Fwiw I think plenty of people are reaching a different position than I am from much more reasonable angles, I think you’re doing good work for good reasons and doing it well!! 💝
@matchu my mentions on Twitter are a disaster right now. But, what has become very clear to me is that overwhelmingly, the people who think gpt3 etc might be intelligent in some sense base this on a view of people as a collection of utilitarian abilities. So, it's not about their high regard of the machine, it's about their low regard of people. They view people as machines, and so don't see much difference between digital and "organic machines", to use a phrase I've had to read at least a dozen times today.
Not that I think that's how you're approaching it. But it seems like you might benefit from having that context, if you didn't have it before. It might help you calibrate, in a sense.
@jenniferplusplus yeahhh people seem to be expecting something “intelligent” in the sense of being a fully-formed adult worker who can be trusted with advanced tasks and exempt the company from copyright law
but if a neural net soup manages to yield anything alive, it would almost certainly come out as basically a baby imo
which really doesn’t seem like the kind of “intelligent” any creators are looking for, or prepared for the responsibility of.