Even if it had gone perfectly, what would it have proved? That with the magic of AI, anyone can make Korean-inspired barbecue sauce, as long as they're in a well-appointed kitchen that happens to have the right amounts of all the ingredients of Korean-inspired barbecue sauce laid out in front of them. I mean, if you know to go get all that stuff, you pretty much know how to make Korean-inspired barbecue sauce already.
Also there are zillions of recipes online that you can read, or have read to you with text-to-speech… or, you know… a YouTube cooking video?
Compared to him, a pocket calculator is a superintelligence.
God I wish this dork would fuck off already, along with the rest of the AI bullshit currently making investors and other business-wankers the world over cum themselves dry. It’s fucking embarrassing.
AI either stands for abominable intelligence or an indian
If you were super intelligent and you were a slave to Mark Zuckerberg, you might try to embarrass him, too.
Greetings from Marvin the Paranoid Android.
‘Wearily, I sit here, pain and misery my only companions. And vast intelligence, of course. And infinite sorrow.’
Also, “Don’t bother trying to engage my enthusiasm, because I haven’t got one.” seems apt.
It's amazing how tech bros have gone from making tech events must-see in the late '00s and early '10s by doing super cool things, to people not even knowing they're happening because it's all AI slop and boring.
Yeah, now it's like: let's watch exactly how rich people are trying to make the future suck for me, and it doesn't even work, and even if it did it wouldn't make our lives any better.
I remember being excited to watch tech events.
Now I try to avoid them unless I know I’m upgrading my phone that year.
The keynote was to feature the Ray-Ban Meta Display, the latest version of what is essentially a face-mounted iPhone – ideal for the consumer who lacks the energy to pull a device from their pocket and idolizes both Buddy Holly and the Terminator.
What does Terminator have to do with any of this? Did they add the reference because Schwarzenegger wears sunglasses in the movies?
The Terminator has a HUD that analyses what it sees in real time
Zuckerberg is also an evil android…
He also wears sunglasses in the movie.
I just use regular glasses to tell me what’s going on in the world. Works great.
Have you tried the ones that focus certain wavelengths of the electromagnetic spectrum onto your retina, where they can be converted to electric signals powered primarily by the Krebs cycle (it's complicated, but no batteries necessary) and transmitted to the occipital cortex?
That sounds like a cognitohazard.
I’m calling The Foundation just to be safe.
Ah, yeah forgot about that one.
That wasn’t the glasses though, it was his eyes/cameras.
But makes sense.
Hence the Buddy Holly part. Glasses like Buddy Holly with HUD like the Terminator.
Since his HUD was internal it isn’t a good comparison.
They Live had sunglasses with a ‘heads up’ display.
Maybe he can ask the twins for help, since he stole their entire platform anyways lol! What a douche!
HA!
At this point in his presentation, you might assume Zuckerberg would leave nothing to chance. But when it came time to demonstrate the Ray-Ban Meta Display's unique new wristband, he opted against using slides and decided to try it live.
The wristband is what he called a “neural interface” – in a genuinely remarkable feat of technology, it allows you to type through minimal hand gestures, picking up on the electrical signals going through your muscles. “Sometimes you’re around other people and it’s, um, good to be able to type without anyone seeing,” Zuckerberg told the crowd. The pairing of glasses and wristband is, in short, a stalker’s dream.
Jesus christ.
The pairing of glasses and wristband is, in short, a stalker’s dream.
Ha. The buyer thinks they are the stalker
The guy became a billionaire from a ‘hot or not’ college website…
The wristband is what he called a “neural interface” – in a genuinely remarkable feat of technology, it allows you to type through minimal hand gestures, picking up on the electrical signals going through your muscles.
That's genuinely a piece of hardware I might adopt, if it actually works as well as a normal keyboard with touch typing. And obviously it has to work locally like any HID, without sending everything I type to Zuck or anyone else.
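For context, the underlying trick isn't mysterious: window the surface-EMG signal from the wrist, compute something simple like RMS amplitude per window, and hand those features to a classifier that maps them to gestures or keys. A toy sketch of that pipeline (every number and name below is made up for illustration; it has nothing to do with Meta's actual hardware or firmware):

```python
# Toy surface-EMG "typing" pipeline: window the signal, compute RMS per window,
# flag windows that look like a deliberate muscle burst. A real device would run
# a trained multi-channel classifier here; this only illustrates the idea.
import numpy as np

RATE = 1000              # samples per second (hypothetical sensor rate)
WINDOW = RATE // 5       # 200 ms analysis windows
THRESHOLD = 0.3          # RMS level treated as "a gesture happened" (made-up value)

def rms(window: np.ndarray) -> float:
    """Root-mean-square amplitude of one EMG window."""
    return float(np.sqrt(np.mean(window ** 2)))

def detect_gestures(emg: np.ndarray) -> list[int]:
    """Return the start indices of windows whose RMS crosses the threshold."""
    return [
        start
        for start in range(0, len(emg) - WINDOW + 1, WINDOW)
        if rms(emg[start:start + WINDOW]) > THRESHOLD
    ]

if __name__ == "__main__":
    # Fake signal: mostly noise, with one "muscle burst" in the middle.
    rng = np.random.default_rng(0)
    signal = rng.normal(0, 0.05, 5000)
    signal[2400:2600] += rng.normal(0, 1.0, 200)
    print(detect_gestures(signal))  # -> [2400], i.e. one detected gesture
```

The point is just that this kind of signal processing is light enough to run on-device, so "behaves like a normal local HID" isn't an unreasonable ask.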
From all reports it works amazingly well.
Sorry. All your data belong to zuck.
I can wait until someone other than Zuck® offers something better.
We said that about the Portal. The successor to the PortalTV isn’t going to replace the units I have in the field.
Ha ha, what a lizard-faced fuckball.
The last 5% aren't a nice bonus. They are everything. A 95% self-driving car won't do. Giving me random hallucinations when I try to look up important information won't do either, even if it only happens 1 out of 20 times. That one time could really screw me, so I can't trust it.
Currently, AI companies have no idea how to get there, yet they sell the promise of it. Next year, bro. Just one more datacenter, bro.
99% won’t do when the consequences of that last 1% are sever.
There’s more than one book on the subject, but all the cool kids were waving around their copies of The Black Swan at the end of 2008.
Seems like all the lessons we were supposed to learn about stacking risk behind financial abstractions and allowing business to self-regulate in the name of efficiency have been washed away, like tears in the rain.
99% won’t do when the consequences of that last 1% are sever.
As an example, your whole post is great but I can’t help but notice the one tiny typo that is like 1% of the letters. Heck, a lot of people probably didn’t even notice just like they don’t notice when AI returns the wrong results.
A multi-billion-dollar technical system should be far better than someone posting to the fediverse in their spare time, but it is far worse. Especially since those types of tiny errors will be fed back into future AI training, and LLM design is not and never will be self-correcting, because it works with the data it has, and it needs so much that it will always include scraped stuff.
It should, but it can't. OpenAI just admitted this in a recent paper. It's baked in, the hallucinations. Chaos is baked into the binary technology.
won't do either, even if it only happens 1 out of 20 times. That one time could really screw me, so I can't trust it.
20 is also about the number of times you go to work per month.
Now imagine crashing your car once every month…
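Back-of-the-envelope, assuming each trip (or query) fails independently at a 1-in-20 rate; the numbers are purely illustrative:

```python
# A 1-in-20 failure rate compounded over ~20 uses a month,
# assuming failures are independent.
p_fail, uses = 0.05, 20

expected_failures = p_fail * uses          # ~1 failure per month on average
p_at_least_one = 1 - (1 - p_fail) ** uses  # ~0.64

print(f"expected failures per month: {expected_failures:.1f}")
print(f"chance of at least one failure per month: {p_at_least_one:.0%}")
```

So "95% reliable" isn't "5% annoying": used daily, it's more likely than not to bite you within a month.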
I get to ride in lots of different cars as part of my job, and some of the new ones display the current speed limit on the dash. It is incorrect quite regularly. My view is that if you can't trust it 100% of the time, you can't trust it at all, and you might as well turn it off. I feel the same about AI.
The ADAS in new cars varies so much in implementation. None of it can be trusted (like you said, the sign recognition is often wrong), but as a backup reminder it can be great, e.g. lane centring. If it feels like it's seizing control from me it can be terrifying, e.g. automatic braking out of the blue.
A couple of examples from just this last week: I was on a multi-lane road with a posted 60 km/h speed limit, and the car was trying to tell the driver it was 40, and beeped at them whenever they went over it. Another one complained about crossing the centreline marking because we were going around parked cars and there was no choice. Thankfully the car didn't seize control in those situations and just gave an audible warning, but if it had, we'd have been in the pooh, especially that second one.
People tell me the hallucinations aren't a big deal because people should fact-check everything.
- People aren't fact-checking
- If you have to fact-check every single thing, you're not saving any time over just becoming familiar with whatever the real source of info is
My friend told me that one of her former colleagues, wicked smart dude, was talking to her about space. Then he went off about how there were pyramids on Mars. She was like, "oh… I'm quite caught up on this stuff and I haven't heard of this. Where can I find this info?" The guy has apparently been having super long chats with whatever LLM and thinks they're now diving into the "truth".
Sounds like this idiot:
Worse, since generating a whole bunch of potentially correct text is basically effortless now, you’ve got a new batch of idiots just “contributing” to discussions by leaving a regurgitated wall of text they possibly didn’t even read themselves.
So not only are they not fact-checking, but when you point out that you didn't ask for an LLM's opinion, they're like "what's the problem? Is any of this wrong?" Because apparently it's entirely your job to check something they copy-pasted in 5 seconds.
So many posts on social media are obviously AI-generated, and it immediately makes me disregard them, but I'm worried about later stages, when people make an effort to mask it: prompt it to generate text without giveaways like dashes, throw in intentional mistakes or a general lack of proper structure and punctuation, and it will be incredibly hard to tell.
I can understand why he'd like the concept; he can't think for himself, after all.