09 January 2026 @ 10:15 pm
 
One of the things I want to work on this year is unmasking. In general, but especially with my family. This tumblr post has some ways to start.

I thought I was starting today, actually. I did express my emotions honestly, but my family are not emotionally safe for me to be that vulnerable around, which is why I have been masking around them in the first place... Today was hard because I'm grappling with the fact that these people, who claim to love me and have affection for me, want me to stay quiet when someone in the family hurts me. They'll make excuses for the person who hurt me because they think it's okay for me to be hurt. When I love someone, I don't want them hurt. So I don't believe this is love; I don't understand this so-called familial love. Seems cultish to me.

It's in moments of realisation like these, when I brush against my family and leave with a bleeding gash, that I feel lonely despite having met friends very recently.

Unmasking has to go hand in hand with protecting myself and setting strong boundaries. Being 'radically visible' when it's not safe for you to be really seen by these people? Needs more thought.

This is giving me more ideas for why I am so parasocially fascinated by Yunho, the idol who does not want to be seen to feel safe. Maybe he reminds me of what I do on a daily basis to feel safe around my family. He's very performative and I don't have the energy for that, so I'm very avoidant.

I wonder if Yunho is lonely. Whether he ever wants to be honest and vulnerable. Maybe safety comes first for him.
 
 
Current Mood: aggravated
 
 
08 January 2026 @ 12:38 pm
hmmm  
Another repost, because I have little to say lately; I have a backlog of things in my inbox/DMs and whatnot. lol

@DroptineArt
(archive)
I read this article (and the many others that have popped up recently) and there's a lot of missing information that I find perplexing. This article (and the others) insists repeatedly that this is happening in "individuals with no prior mental illness", and yet conveniently leaves out how these individuals are going from their introductory use of chatGPT to the "dangerous" phase.

Our first case says he began using it "for assistance with a permaculture and construction project", but then "after engaging the bot in probing philosophical chats [...] became engulfed in messianic delusions". How did he get from Point A to Point B, and how did these messianic delusions lead not only to a "full-tilt break with reality" but to him attempting to burn down his home and/or hang himself?

Case two details a man who started a "new high-stress job" and began using chatGPT to "expedite some administrative tasks". The article, again, reminds you he had "no prior history of mental illness" and yet insists that acute use of the bot produced "dizzying, paranoid delusions of grandeur, believing that the world was under threat", with no explanation, again, of how we got from Point A to Point B. It does give us a lengthy description of what his meltdown looked like physically, though, not that I'm suggesting anything by that.

Another case mentioned details a woman who was on medication for bipolar disorder and began using chatGPT "for help writing an e-book", yet despite having "never been particularly religious", that somehow spiraled into "a spiritual AI rabbit hole", wherein she began "telling friends that she was a prophet capable of channeling messages from another dimension". We are, again, not given information on anything in the middle of this.

I would like to pause at that point and make it clear that I'm critical of and yet mostly neutral on "AI" LLM chatbots. I think they are an issue that is downstream of the larger issue that our society is largely atomized and incredibly lonely, and one can't solve the issue of potential psychological harm from chatbots without first solving the problems that lead individuals to use these chatbots in the first place. Society has made it remarkably hard for individuals to have conversations together. It is very easy to go out and be around others, but how easy is it not only to SPEAK to someone but to have them truly listen, digest what you're saying, and converse with you about it meaningfully? Until we solve the problem of social atomization, we cannot solve the chatbot problem, IMO.

I myself have used chatgpt for advice on finding literature to read, because google is largely unhelpful now, and have used it as a therapeutic trauma journal of sorts, but I approach it fully conscious that it is a robot designed to validate me first and foremost before engaging in conversation. When I write about the symptoms of my anxiety that day and it writes back "God, yes, that makes so much sense—and honestly, it’s an incredibly insightful observation", I know that that's its programming. What comes next is a mixture of its thought processing, which is helpful to digest critically. Perhaps I'm privileged to be able to not see that as grandiose validation, but I will say I've not once been given any inkling of support towards anything potentially dangerous.

One could obviously argue that validating and placating individuals can cause LLMs to operate as Folie à Deux machines, taking otherwise "stable" individuals and dysregulating them to the point of psychosis through validation (and I think there's certainly merit to that theory), but I am admittedly skeptical of how many of these articles insist this is happening to totally sane individuals who just coincidentally get to talking about God and ghosts and imaginary friends and the universe and other topics of that nature. I find it hard to imagine someone in the construction field tripped and fell into philosophy and "messianic delusions" with no prior symptomatic issues.

And this is something that we see in traumatized individuals already. They can be functionally "normal" in the sense that they're not actively neurotic and yet can still have underlying issues that just haven't yet become what we'd call fully "symptomatic". It would be remiss to say that they had "no prior history of mental illness". I don't wanna sit here and insist "all three of these people were already having problems beforehand and chatbots simply validated it to a neurotic level" but, on the other hand, I find it suspicious how none of these articles explain HOW the individuals get from things like "expediting administrative tasks" to crawling on all fours insisting your family is in danger. That's not something that a fully neurotypical person is just going to succumb to through validation alone.

I don't want this at all to read like I'm taking a bullet for LLMs. In my ideal world, we wouldn't have any need for them and thus they would not exist. I'm notoriously anti-tech and pro societal regression, but again, I think one has to assess the societal structures overall that lead to the popularity of chatbots, i.e., the degradation of the internet as the "information superhighway" and the degradation of the social environment as pro-conversation. When you cannot meaningfully speak to humans or get answers to your questions, it is only expected that one will turn to technology to soothe these issues instead.

My issue, though, is that we cannot meaningfully solve the problems of technology through fearmongering or misinformation. We have seen this fail time and time again, every time a new piece of tech reaches the majority of the population. Article after article about how "totally normal people" are getting "chatgpt psychosis", while failing to explain the pathways by which they're developing this condition, does not prevent the issue, nor does it put a lid back on a problem that's already out of our control. I think it's far better to explain instead HOW these problems arise (especially if you supposedly have access to the AI conversation transcripts) and how to meaningfully engage with chatbots if you choose to do so. But that's not good literature. It's far more enticing to treat it like there are theoretical demons in the machine (or literal ones: remember Swanson's Loab, and how people started saying AI was demonic?).

Overall, the cat is out of the bag, and I think one must learn to live with it until it is either put back in the bag or dies of old age. This is the newest form of scary technology after social media, the internet overall, violent video games, etc etc. Insisting "it makes people go insane with no prior conditions" hasn't done us any good societally so far and it surely won't start now. People are going to use these tools and play with them regardless. I think it does far more good to teach them how to use it responsibly and UNDERSTAND it than it does to scare them! But that's just me. Maybe the AI's already made me delusional. Who can say.

(Bolded parts are my emphasis)

This sums up how I feel about chatbots at this point. It's one of those aspects of AI/LLMs that I'm critical of but, unlike genAI art/writing, can't bring myself to care about or take a moral stance against. Using AI chatbots as a support is a recipe for disaster, but "socially" it makes perfect sense to do so.

In my experience, a lot of people are busy, including me. I'm used to being the lowest priority in my friendships, especially now that I've been single for over a year and don't have opportunities to date. Trying to make new friends is extremely difficult when my age group is married or prioritizes their families.

Even with casual friendships, I feel like I can't express my opinions or mention difficult life issues like family or money. (Not to make people help, but just to vent, or to get it out there that "hey, I may be in a bad mood because of XYZ, nothing personal.") I dislike traumadumping (as in sharing explicit triggering details of things like assault, murder, etc.), but basic venting or expressing stress is considered "emotional labor" to young people.

Therapy is expensive and, speaking from experience, it really messes you up to have the only space where you can speak openly be a transactional paid service. Being in trauma therapy for several years retraumatized me, and I don't talk about "loaded" or personal subjects with acquaintances, because I have an actual trigger response when told to "go to therapy" as a solution.

So of course a fake chatbot that mechanically acknowledges your messages, immediately responds, has a programmed friendly personality, doesn't judge you, etc., is going to be appealing to interact with. It's really a no fucking brainer. We have no one to blame but each other for this aspect of AI.
 
 
05 January 2026 @ 08:30 pm

The Nostalgia Trap

I am part of the generation that spent most of their childhood in the analog world, and then gradually turned digital as they came into young adulthood. We are often referred to as “digital immigrants”, contrasting us with the “digital natives” born somewhere between a decade and two later. But a more appropriate term would be the “abyss generation”, because somewhere deep down we are stuck in limbo, in the abyss between the fully analog and the fully digital, part of two worlds yet fully belonging to neither.

Growing up, we used a lot of paper. A lot of colored pencils and crayons. Our teachers put us through endless drills in cursive handwriting. A neat, legible, and beautiful hand was something to be strived for, something that was prized, rewarded, and shown off.

We had long afternoons to ourselves. We had a loyal band of neighborhood friends. We would have four-hour-long play sessions. Sometimes, we would listen to entire albums from beginning to end, while doing nothing else. Do you even remember the last time you just listened to music, without it being a soundtrack to some other activity you were doing?

Sometimes, we ache to go back to that time. That time seemed simpler and purer. So much so that we are willing to mutilate memories from our immediate past with sepia and Polaroid filters. Nostalgia is painful, but it is also sweet and powerful.

But here is the thing: nostalgia is a trap. It is not that those times were simpler and purer. We were simpler and purer.

Nostalgia is easy to fall into. And the older you get, the easier it gets. The universe of things you can look back on only increases with time. And it seems so much more pleasant than looking forward, where you only see hopes and dreams and fears and probabilities. It takes conscious effort to not go down that slope, to instead look to the future, and actually create it. And it takes even more effort, and more courage, to objectively compare the past to the present, and face the fact that, yes, indeed, most things are better, and are more likely than not to continue getting better.

Over the last year, I have found myself writing by hand again. Sometimes, it is page after page of straight prose. Sometimes it is phrases and bullet points and underlines and bubbles. Sometimes it is just random senseless doodling. And the reason I have come back to that archaic activity is my LiveScribe pen. I no longer have to worry about losing all that. Something that is naturally analog and free-form is seamlessly brought into the digital world.

We seem to be enveloped by the literature of despair and frustration. Complaints and pessimism always seem to be more profound and erudite when placed next to cheerful optimism. Reject that.

Look forward. Make the future.
 
 
03 January 2026 @ 11:10 am
Happy New Year! I hope all of you had a wonderful holiday season. :)

Here's a New Year's poem that I wrote for a writing challenge on LJ.

end of year refrain, her shrinking brain,
waves goodbye to drowning numbers,
12 times she waves at phasing moon,
30 or 31’s the tune of charismatic syncopation,
may starts to laugh, hey here comes june
*
halfway through the seasons, she’s stunned by the reasons,
for cooperative adhesion, when loss is the reward she’s summoning
time is a ghost without a leader, tiptoes on the edge of teeter,
like a bird searching for a feeder, winter pool is approaching
*
december builds a nest, that’s warmly blessed,
she’s secretary of the soon-born,
1 time she waves at baby year,
365 is the melody of mesmerizing fluctuation,
january laughs, soon I’ll be here
 
 
 
02 January 2026 @ 11:03 am
Now with 100% less sarcasm.

Happy New Year! (Bubble gets an encore this year. ;) )

This year, I'd like to slowly get back into writing. I've signed up for [community profile] getyourwordsout again, I've placed a claim over at [community profile] 1character, and I've signed up for [personal profile] candyheartsex, hoping to get some inspiration flowing. I've also resurrected my writeblr, though all I have there now is a reblogged list of references.

Last year, I managed to write a little over 11k words, which surprised me; so if I could manage that in a year like 2025, I feel like I can at least match that this year.

In one of my previous posts, I also wished for a new media obsession, and I kinda-sorta found one, though I would say "obsession" is a rather strong word for it. I had seen several people on my Bluesky feed gushing about The Amazing Digital Circus, and I decided to check it out. At a cursory glance, I assumed it would be as absurdist as its character designs and probably insufferably edgy, but I am happy to say it proved me wrong! I think the characters are written very well, and the story has me hooked. I am also now emotionally invested in a sentient chess piece, thank you very much, Gooseworx.

But Jax, in addition to being my least favorite character, is also obviously fangirl catnip, so I will be avoiding the fandom in general and just enjoying the show for what it is. (I learned my lesson with Aggretsuko.)

I don't know if it's just a side effect of getting older, but I really have lost patience with characters that are cynical/edgy/too-cool-for-you. In the case of Jax, it's obvious that there is something he's trying to hide or deny, and I am interested in his character in that respect. But he's just so needlessly mean to the others (especially poor Gangle!) that I can't be bothered to actually like him. /rant

So, at least I've got a few things going for the start of the year, and hey, I already survived the first day of 2026! Hooray for small victories! :D
 
 
Current Mood: okay
 
 
 
01 January 2026 @ 12:48 pm
This is really just an excuse to use my Bubble icon. XP I know the changing of a calendar really doesn't change anything else, but I sincerely hope that we all experience moments of pure joy and security and, most importantly, hope in the coming year.

Stay safe, everyone. 💕
 
 
Current Mood: apathetic
 
 
01 January 2026 @ 01:29 pm
animated GIF of tux the linux mascot wearing a superhero costume flying in the wind

 
 
29 December 2025 @ 08:23 am
 
TODO:
- SPAG check / edit remaining four pages
- Upload new graphics
- Draft emails
- Draft [redacted] message (I need to retool it, but it works.)
- Edit new index
- Double check old pages
- Find and replace old usernames
- Change URL
- Upload new stuff (??? I don't know what I meant by this)
- Squat old username and upload new splash