No, we don’t say that, and I was completely confused until I read the English line.
I get anxiety just by looking at the awful job someone did filling the title background.
More veggies and less fruit; too much sugar. Update: Sorry for the duplicate, lemmy client glitched.
I’m from Europe and I always assumed that America does that, because it’s the cheapest option by far.
Technology@lemmy.world • WhatsApp provides no cryptographic management for group messages (5 days ago):
It’s not called Meta data by accident 🤣
“I think Luigi did it and he should be free.”
A perfectly valid opinion, and so are:
- I think Luigi didn’t do it and should be free because of that.
- The person who actually did it should be free, because <insert personal reason/opinion why>
If you’d celebrate the real killer, then arguing that Luigi didn’t do it seems secondary (…)
That’s not really secondary. If you believe that he didn’t do it, for whatever reason (I’m not really into the technical details of the case and the evidence or lack thereof), why would you argue that he did it but should still go free? Wouldn’t it be more honest, and reflect your beliefs better, to say “Luigi didn’t do it and of course should be released, and the person who did should also go free because it was justified”?
I think many people celebrate the person who actually killed that CEO (whether or not it was Luigi personally). That doesn’t really have to conflict with thinking that Luigi didn’t do it. In the first instance he just stands in for the person who did it, because we don’t know who really did.
Shut that section down and ground the wires. Not really that dangerous. It’s only dangerous if you don’t follow protocol.
That’s not why JS is a big pile of crap. It’s because the language wasn’t thought through at the beginning (I don’t blame the inventors for that), and because of the web it spread like wildfire and only backwards-compatible changes could be made. Even with all your points in mind, the language could be way nicer. My guess is that once wasm/wasi is integrated enough to run websites without JS (DOM access, etc.), JS will be like Fortran, Cobol and Telefax: not going away any time soon, but practically obsolete.
Maybe the “Office of” part should be dropped 🤣
In theory you can, because the second law is actually a statistical, probabilistic thing. Currently it looks like the laws of physics are time-direction independent, so if you played a physics simulation forwards and backwards you couldn’t tell the difference for a small number of particles. That means entropy actually can decrease, and it does, on very small scales and time frames.
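A minimal toy sketch of that statistical point, assuming an Ehrenfest-urn-style model (particles hopping at random between the two halves of a box; all parameters made up for illustration): visibly lopsided, “low entropy” configurations keep recurring when there are few particles, but essentially never when there are many.

```python
import random

def simulate(n_particles: int, steps: int = 100_000, seed: int = 0) -> float:
    """Ehrenfest urn: n particles in a box, each step one random particle
    hops to the other half.  Returns the fraction of steps (after a burn-in)
    spent in a visibly 'low entropy' state with >= 70% of particles on one side."""
    rng = random.Random(seed)
    left = n_particles                      # start with everything on the left
    burn_in = 10 * n_particles              # let the system relax toward 50/50 first
    low_entropy_steps = 0
    for step in range(steps + burn_in):
        # moving a uniformly random particle to the other side:
        if rng.random() < left / n_particles:
            left -= 1                       # a particle hops left -> right
        else:
            left += 1                       # a particle hops right -> left
        if step >= burn_in and max(left, n_particles - left) >= 0.7 * n_particles:
            low_entropy_steps += 1
    return low_entropy_steps / steps

for n in (10, 100, 1000):
    print(f"{n:5d} particles: fraction of time in a 70/30 (or worse) split = {simulate(n):.5f}")
```

In this toy model, 10 particles spend roughly a third of their time in a 70/30 split, while 1000 particles practically never do; that is the sense in which the second law is statistical rather than absolute.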
Technology@lemmy.world • AI models routinely lie when honesty conflicts with their goals (10 days ago):
“Amazingly” fast for biochemistry, but insanely slow compared to electrical signals, chips and computers. But to be fair, the energy usage really is almost magic.
Technology@lemmy.world • AI models routinely lie when honesty conflicts with their goals (11 days ago):
But by that definition, passing the Turing test might be the same as superhuman intelligence. There are things that humans can do but computers can’t, yet there is nothing a computer can do that it still does more slowly than a human. That’s because our biological brains are insanely slow compared to computers. So once a computer is as accurate as or better than a human at a task, it’s almost instantly superhuman at that task because of its speed. So if we have something that’s as smart as humans (which is practically implied, because it’s indistinguishable), we would have superhuman intelligence: something as smart as humans that (numbers made up) can do 10 days of cognitive human work in just 10 minutes.
Technology@lemmy.world • AI models routinely lie when honesty conflicts with their goals (11 days ago):
AI isn’t even trained to mimic human social behavior. Current models are all trained by example, so they produce output that would score high in their training process. We don’t even know what their goals are (and they’re likely not even expressible in language), but anthropomorphised they’re probably something like “answer in a way that the humans who designed and oversaw the training process would approve of”.
Technology@lemmy.world • AI models routinely lie when honesty conflicts with their goals (11 days ago):
To be fair, the Turing test is a moving goalpost, because if you know that such systems exist you’d probe them differently. I’m pretty sure that even the first public GPT release would have fooled Alan Turing personally, so I think it’s fair to say these systems have passed the test at least since that point.
Technology@lemmy.world • AI models routinely lie when honesty conflicts with their goals (11 days ago):
We don’t know how to train them to be “truthful” or make that part of their goal(s). Almost every AI we train is trained by example, so we often don’t even know what the goal is, because it’s implied in the training. In a way AI “goals” are pretty fuzzy because of that complexity, a bit like in real nervous systems, where you can’t just state in language what the “goals” of a person or animal are.
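A minimal sketch of what “trained by example” means, assuming a toy logistic-regression stand-in (all data and numbers are made up): nowhere below is a goal like “be truthful” written down; the only thing the loop optimizes is agreement with the example labels, so any “goal” is implied by the data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Made-up examples: inputs X and the labels someone approved of.
X = rng.normal(size=(200, 3))
true_w = np.array([1.5, -2.0, 0.5])
y = (X @ true_w + rng.normal(scale=0.3, size=200) > 0).astype(float)

w = np.zeros(3)
lr = 0.1
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w)))   # model's predictions
    grad = X.T @ (p - y) / len(y)        # gradient of the cross-entropy loss
    w -= lr * grad                       # nudge weights toward the examples

# The learned weights roughly recover the pattern hidden in the examples,
# even though that pattern was never stated anywhere as an explicit goal.
print("learned weights:", w.round(2))
```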
Hacker News@lemmy.bestiver.se • New ChatGPT Models Seem to Leave Watermarks on Text (22 days ago):
Non-breaking spaces and zero-width spaces are not watermarks. They are used all the time in professional typesetting, and it’s actually a good thing that GPT models can do that now.
But of course they can be a tell if your work emails are written like a professional typesetter’s, though to be fair there are a lot of other tells in GPT outputs too.
PS: In fact there are even keyboard layouts (though they are rare), for example the German E1 extension (https://de.wikipedia.org/wiki/E1_(Tastaturbelegung)), that can type those characters. They are used to prevent unwanted line breaks, for example between numbers and their units, or to allow hyphenation in long words.
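A small illustrative sketch (the sample string is made up) of how those otherwise invisible typesetting characters can be spotted in a piece of text:

```python
import unicodedata

# Characters that typesetters (and some model outputs) use but that are
# invisible or look like ordinary spaces.
SUSPECTS = {
    "\u00a0",  # NO-BREAK SPACE
    "\u202f",  # NARROW NO-BREAK SPACE (e.g. between a number and its unit)
    "\u200b",  # ZERO WIDTH SPACE (allows a line break inside a long word)
    "\u00ad",  # SOFT HYPHEN (allows hyphenation of a long word)
}

def find_invisible(text: str):
    """Yield (index, codepoint name) for every such character in the text."""
    for i, ch in enumerate(text):
        if ch in SUSPECTS:
            yield i, unicodedata.name(ch)

sample = "The cable is 3\u202fm long and uses Donau\u00addampfschiff terminology."
for index, name in find_invisible(sample):
    print(index, name)
```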
Probably because a lot of them (especially _iel) use literal translations that nobody in their right mind would use in everyday conversation. Like in this post with “michmichs”.