intensely_human

joined 2 years ago
[–] [email protected] 1 points 6 days ago

I’m considering getting a clock for my kitchen

[–] [email protected] 8 points 1 week ago (2 children)

I have no clock in my apartment. To find out what time it is, I need to fire up one of my computers and look. Sometimes it's the Kindle.

[–] [email protected] 1 points 1 week ago (1 children)

They aren't bullshitting because the training data is based on reality. Reality bleeds through the training data into the model. The model is a reflection of reality.

[–] [email protected] 1 points 1 week ago

Computers are better at logic than brains are. We emulate logic; they do it natively.

It just so happens there's no logical algorithm for "reasoning" a problem through.

[–] [email protected] -1 points 1 week ago (3 children)

Fair, but the same is true of me. I don't actually "reason"; I just have a set of algorithms memorized by which I propose a pattern that seems like it might match the situation, then a different pattern by which I break the situation down into smaller components and then apply patterns to those components. I keep the process up for a while. If I find a "nasty logic error" pattern match at some point in the process, I "know" I've found a "flaw in the argument" or "bug in the design".

But there's no from-first-principles method by which I developed all these patterns; it's just things that have survived the test of time when other patterns have failed me.

I don't think people are underestimating the power of LLMs to think; I just think people are overestimating the power of humans to do anything other than language prediction and sensory pattern prediction.

[–] [email protected] 1 points 1 week ago

richest person alive

[–] [email protected] 5 points 1 week ago

humanoid robot: dances

amazon: shock

humanoid robot: makes coffee

amazon: shock

humanoid robot: delivers package

amazon: friendly shock

[–] [email protected] 1 points 1 week ago

“eyesed” in the eyes of the literal

[–] [email protected] 3 points 1 month ago

Shadowrun is old now, man

[–] [email protected] 5 points 1 month ago

That’s a great question!

[–] [email protected] 9 points 1 month ago (1 children)

Easy for a man to get custody of the kids? What world do you live in?

My friend’s ex repeatedly crashed her car while drunk with their daughter in the car and he had to fight a pretty much non-stop court battle for over a decade to maintain his 50%.

[–] [email protected] 1 points 1 month ago

(they’re both wrong)

 
 

I just spoke with a friend of mine on the phone. He’s got various tech support issues including worrying that his phone might have some malware on it.

I have the skills to help him, but I don’t have the time. Therefore I’m looking for more options.

My question is: are there services that specialize in tech support for elderly people? Maybe coming over to do tasks for them, or a place that he can go with devices?

I recommended he go to the T-Mobile store for questions about his phone’s security, but I know it can be hit or miss with the expertise of retail employees.

Does anyone know of an IT service that specializes in helping elderly/tech-non-savvy individuals?

 

An O’Neill cylinder is that big rotating-cylinder space station design that uses the spin for artificial gravity.

At higher elevations the gravity will be lower. BMX bikes will be fun too. Make a big jump and you can sail across the center and land on the other side, or drift into the zero-gee zone in the middle, which works because you're always on the inside of the curve.
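The gravity-drops-with-elevation effect above can be sketched with the standard centripetal-acceleration formula, a = ω²r. The cylinder radius and rim gravity here are illustrative Island Three-scale numbers, not figures from the original comment:

```python
import math

# Hypothetical O'Neill cylinder dimensions (Island Three-scale; illustrative only)
R = 4000.0   # cylinder radius in meters
g = 9.81     # target spin gravity at the rim, m/s^2

# Spin rate so that centripetal acceleration at the rim equals g:
# a = omega^2 * r  =>  omega = sqrt(g / R)
omega = math.sqrt(g / R)
period = 2 * math.pi / omega  # seconds per full rotation (~2 minutes)

def gravity_at_height(h):
    """Effective spin gravity at height h (meters) above the rim floor.

    It falls off linearly with distance from the rim: you are standing on
    the inside of the curve, and closer to the axis the curve pulls less.
    """
    return omega ** 2 * (R - h)

print(f"rotation period: {period:.0f} s")
print(f"gravity at rim:  {gravity_at_height(0):.2f} m/s^2")
print(f"gravity 1 km up: {gravity_at_height(1000):.2f} m/s^2")
print(f"gravity at axis: {gravity_at_height(R):.2f} m/s^2")  # zero-gee zone
```

With these numbers the station spins about once every two minutes, and a BMX jump 1 km above the floor already sheds roughly a quarter of its weight.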

 

I’ve noticed ChatGPT gets less able to do precise reasoning or follow instructions the longer the conversation gets.

It felt exactly like working with a student who was getting tired and needed to rest.

Then I had the above shower thought. Pretty cool, right?

Every few months a new ChatGPT v4 is deployed. It’s got new training data, up through X date. They train up a new model on the new content in the world, including ChatGPT conversations from users who’ve opted in (or haven’t opted out; I can’t remember how it’s presented).

It’s like GPT is “sleeping” to consolidate “the day’s” knowledge into long-term memory. All the data in the current conversation is its short-term memory. After handling a certain amount of complexity in one conversation, the coherence of its responses breaks down, becoming more habitual and less responsive to nuance. It gets tired and can’t go much further.

 

I just stopped at McDonald’s and ordered two orders of hash browns. I expected to get four “patties” of hash browns, but only got two. Each order was one big oval-shaped chunk of hash browns.

I asked the guy about it, asked when it had changed, and he said it’s always been that way. I searched Google Images, and all the pictures show a single chunk per order.

Does anyone else remember an order of hash browns being two separate pieces?

For me this changed within the last week or so, because the last time I got an order with two pieces in it was only a week or two ago.

 

I asked GPT-4 for a list of the most important threats to human civilization, their likelihood, and why they were considered threats.

GPT's output is also pasted into the comments.
