• 80 Posts
  • 1.34K Comments
Joined 3 years ago
Cake day: June 10, 2023



  • It was a hell of a surprise when I cut open a peach and the pit was smaller and softer than usual, and it split in two in my hands, and a little, slightly drowsy-looking winged ant crawled out of one of the halves and started walking around on the counter. Little guy must have had such a long journey. I don’t know how the hell they got INSIDE the pit.



  • Mr Ellison, we’re very disappointed in you. You have failed to understand that, as a member of the human race and of our society, the rest of us have certain minimum expectations of you that you need to live up to, and when you say things like wanting to keep some of us captive, you must understand that you’re not hitting those expectations. Think of them like “KPIs.”

    Much as you might like to, you simply cannot continually fail to meet those minimum expectations and expect the rest of us to take you seriously or listen to anything you have to say. It’s a non-starter, because it’s obvious that whatever you say is fundamentally tainted by your total lack of respect for humanity, so there’s just no need for us to consider any of it.

    Despite your failings, you’re presumably born of flesh and blood, so that means you can do better. Just try harder and eventually it’ll be like second nature.






  • I don’t feel like LLMs are conscious, and I act accordingly as though they aren’t, but I do wonder about the confidence with which you can totally dismiss the notion. Assuming that they are seems like a leap, but since we don’t really know exactly what consciousness is, it seems difficult to rigorously decide what does and doesn’t get to be in the category. The usual means by which LLMs are explained not to be conscious, and indeed what I usually say myself, is something like your “they just output probability based on current context”, or some variation of “they’re just guessing the next word”, but… is that definitely nothing like what we ourselves do and then call consciousness? Or, if it is definitively quite unlike anything we do, does that dissimilarity alone suffice to declare LLMs not conscious? Is ours the only possible example of consciousness, or is the process that drives an LLM’s behaviour possibly just another form of consciousness, another way of arriving at it?

    There’s evidently something that triggers an instinctual categorising: most wouldn’t classify a rock as conscious, and would find my suggestion that ‘maybe it’s just consciousness in another form than ours’ a pretty weak way to assert that it is. But then again, there’s quite a long way between a literal rock and these models running on specific rocks arranged in a particular way, producing text in a way that’s really similar to the human beings we all collectively tend to agree are conscious.

    Is being able to summarise the mechanisms that underpin the behaviour, whose output or manifestation looks like consciousness, enough on its own to explain why it definitely isn’t consciousness? What if our endeavours to understand consciousness, and to find a biological basis for it in ourselves, bear fruit, and we can explain deterministically how brains and human consciousness work? In that case we could, if not totally predict human behaviour deterministically, then at least give a pretty good and similar summary of how we produce those behaviours that look like consciousness. Would we at that point declare that human beings are not conscious either, or would we need a new basis upon which to exclude these current machine approximations of it?

    I always felt that things such as the Chinese Room thought experiment didn’t adequately deal with what I was driving at in the previous paragraph, and it seems to me that dismissals of machine consciousness on the grounds that LLMs are just statistical models that don’t know what they’re doing miss a similar point. Are we sure that we ourselves are not mechanistically following complicated rules just as neural networks and LLMs are, and that this is simply what the experience of consciousness actually is: an unconscious execution of rulesets? Before the current crop of technology that has renewed interest in these questions, when it all seemed a lot more theoretical and perennially decades off, I was comfortable with this uncomfortable thought. Now that we actually have these impressive models that have people wondering about the topic, I seem to be skewing more skeptical and less generous about ascribing consciousness. Suddenly the Chinese Room thought experiment, as a counter to whether these conscious-looking LLMs are really conscious, looks more convincing, but that’s not because of any new or better understanding on my part. I seem to be just shifting the goalposts when faced with something that does a better job of looking conscious than any technology I’d seen previously.




  • My expertise is with video, not stills photography, so maybe take this with a pinch of salt, but since there aren’t many responses yet it might be of value. There are different types of LUTs, but I think odds are you’re thinking of “creative LUTs” or “look LUTs”, so I’m speaking with reference to those.
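
    To make concrete what “applying” one of these means mechanically, here’s a minimal sketch in Python with placeholder data (a real LUT would be parsed from e.g. a .cube file, and real software is more careful than this): a 3D LUT is a small cube of output colours that each pixel’s RGB value indexes into and is interpolated between.

    ```python
    # Sketch of 3D LUT application: each pixel's RGB indexes into a small
    # cube of colours and is trilinearly interpolated. The LUT data here is
    # random placeholder noise, not a real "look".
    import numpy as np
    from scipy.interpolate import RegularGridInterpolator

    SIZE = 33                                  # a common .cube resolution
    grid = np.linspace(0.0, 1.0, SIZE)
    lut = np.random.rand(SIZE, SIZE, SIZE, 3)  # placeholder look-up data

    def apply_lut(rgb, lut):
        """Trilinearly interpolate each output channel of a 3D LUT."""
        flat = np.clip(rgb, 0.0, 1.0).reshape(-1, 3)  # LUT only covers [0, 1]
        out = np.empty_like(flat)
        for c in range(3):                     # one interpolator per channel
            interp = RegularGridInterpolator((grid, grid, grid), lut[..., c])
            out[:, c] = interp(flat)
        return out.reshape(rgb.shape)

    frame = np.random.rand(4, 4, 3)            # stand-in for an image
    graded = apply_lut(frame, lut)
    ```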

    With video footage shot in log, one typically does a transform first to bring the footage into a colour space appropriate for the project, using a transform function specific to the camera and brand. This is because the LUT will have been designed expecting a specific starting gamma curve and colour space, and the footage typically needs to be transformed into those first to avoid unexpected results.
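
    As a rough sketch of that order of operations (the log curve and the matrix below are invented placeholders, not any real camera’s maths):

    ```python
    # Toy version of "transform first, then LUT": undo the camera's log
    # encoding and convert into the space the LUT expects before grading.
    # Both the curve and the matrix are made up for illustration.
    import numpy as np

    def decode_placeholder_log(x):
        """Stand-in log-to-linear curve; real cameras publish their own."""
        return (2.0 ** (10.0 * x) - 1.0) / (2.0 ** 10.0 - 1.0)

    # Invented 3x3 camera-native -> working-space matrix; real ones come
    # from the manufacturer or colour-science data.
    CAM_TO_WORKING = np.array([[ 1.10, -0.05, -0.05],
                               [-0.04,  1.08, -0.04],
                               [-0.02, -0.06,  1.08]])

    def prepare_for_lut(log_rgb):
        linear = decode_placeholder_log(log_rgb)  # 1. undo the log curve
        working = linear @ CAM_TO_WORKING.T       # 2. convert colour space
        return np.clip(working, 0.0, 1.0)         # 3. now apply the look LUT

    footage = np.random.rand(4, 4, 3)   # pretend log-encoded pixels
    ready = prepare_for_lut(footage)    # feed this to the LUT, not `footage`
    ```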

    While RAW is a bit different from log, much the same principles are at play. It starts out with no gamma encoding or display colour space, but when you open it in editing software, the program applies a debayer and a default colour/tone interpretation. In that sense, the “transform” stage happens first in the RAW development settings, and any LUTs would necessarily come afterwards.

    Another good reason to keep a LUT at the end of whatever processing chain you have is that, depending on what the LUT does, the way it remaps values can produce destructive results where you get clipping or oversaturation. It’s best to make sure that happens last, so you can compensate for it ahead of time, before information is lost to that destructive step.
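
    A toy illustration of that point, with an invented high-contrast “look” standing in for a LUT:

    ```python
    # Toy demo of why destructive remapping should come last: once values
    # clip, the information is gone, so exposure compensation has to happen
    # *before* the look. `punchy_look` is an invented stand-in for a LUT.
    import numpy as np

    def punchy_look(rgb):
        """Stand-in for a high-contrast look that clips highlights."""
        return np.clip(rgb * 1.6 - 0.1, 0.0, 1.0)

    highlights = np.array([0.70, 0.80, 0.95])  # three distinct highlight tones

    # Wrong order: look first, compensate after. All three tones clip to 1.0,
    # so pulling exposure down afterwards just leaves a flat patch.
    flattened = punchy_look(highlights) * 0.8  # -> [0.8, 0.8, 0.8]

    # Right order: compensate first, then apply the look. Separation survives.
    preserved = punchy_look(highlights * 0.8)  # -> [~0.80, ~0.92, 1.0]

    print(flattened, preserved)
    ```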