To accelerate the transition to memory-safe programming languages, the US Defense Advanced Research Projects Agency (DARPA) is developing TRACTOR, a program for automated code conversion.

The term stands for TRanslating All C TO Rust. It’s a DARPA project that aims to develop machine-learning tools that can automate the conversion of legacy C code into Rust.

The reason to do so is memory safety. Memory safety bugs, such as buffer overflows, account for the majority of major vulnerabilities in large codebases. DARPA's hope is that AI models can help with the programming-language translation and thereby make software more secure.
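The class of bug at stake can be sketched in a few lines. This is an illustrative example, not code from the TRACTOR program: the same out-of-bounds access that is silent undefined behavior in C is either an explicit `None` or a loud runtime panic in Rust.

```rust
fn main() {
    let buf = [0u8; 4];

    // In C, reading buf[10] would silently access adjacent memory
    // (undefined behavior, the root cause of many buffer-overflow exploits).
    // Rust's checked accessor makes the out-of-bounds case explicit:
    assert_eq!(buf.get(10), None);

    // A direct index like buf[10] would panic at runtime instead of
    // corrupting memory, so the failure is loud rather than exploitable.
    println!("in-bounds value: {}", buf[0]);
}
```

The point is that the safety property comes from the language's semantics, which is why translation (rather than auditing C) is attractive in the first place.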

“You can go to any of the LLM websites, start chatting with one of the AI chatbots, and all you need to say is ‘here’s some C code, please translate it to safe idiomatic Rust code,’ cut, paste, and something comes out, and it’s often very good, but not always,” said Dan Wallach, DARPA program manager for TRACTOR, in a statement.

  • @Pyro · 37 points · 1 month ago (edited)

    The problem I see here is that AI may write code that does compile, but has a bunch of logical errors, edge cases or bad architecture. From personal experience, even though AI can write small amounts of good code, it’s bad at understanding big and complex solutions.

    At that point, fixing the AI codebase might take longer than just starting with a competent Rust dev.

    • @[email protected] · 25 points · 1 month ago

      This is a timely reminder and informative for people who aren’t aware that LLMs don’t actually understand anything. At all.

      That doesn’t mean they’re useless, but yes, if you want an LLM to handle complex input and use it to generate complex output correctly… You may need to reevaluate your choice of tooling or your process.

      • @FizzyOrange · 1 point · 1 month ago

        This is a timely reminder and informative for people who want to seem smug I guess? Or haven’t really thought about it? … that the word “understand” is definitely not defined precisely or restrictively enough to exclude LLMs.

        By the normal meaning of “understand” they definitely show some level of understanding. I mean, have you used them? I think current state of the art LLMs could actually pass the Turing test against unsophisticated interviewers. How can you say they don’t understand anything?

        Understanding is not a property of the mechanism of intelligence. You can’t just say “it’s only pattern matching” or “it’s only matrix multiplication” therefore it doesn’t understand.

        • @[email protected] · 1 point · 1 month ago

          I think understanding requires knowledge, and LLMs work with statistics around text, not knowledge.

With billions of dollars pumped into these companies and enough web crawling to make a data hoarder green with envy, they've reached a point where it seems like they know things. Like they've incorporated, in some capacity, that some things are true and some are related. Like they understand.

That is not the case. Ask me a thousand times for the solution to a math problem, and a thousand times I'll give you the same solution. Ask an LLM about a math problem, and you'll get different strategies, different rules, and different answers, often incorrect. Same thing with coding. Step back a little, and you could apply that to any task that requires reasoning.

But you don't need to go that far: ask an LLM about anything without sufficient material on the internet for it to absorb, and it'll just make things up. Because an LLM has no concept of knowing, it also has no concept of not knowing. It knows anything about as well as your smartphone keyboard does. It's autocomplete on steroids.

          So let’s say they understand—they contain information about—the statistical relations between tokens. That’s not the same as understanding what those tokens actually mean, and the proof of that is how much basic stuff LLMs get wrong, all the time. The information they hold is about the tokens themselves, not about the real world things those tokens represent.

          At this point, I don’t think it’s unreasonable to say that insisting LLMs understand anything is a discussion more related to the meaning of words than to current AI capabilities. In fact, since understanding is more closely associated with knowledge that you can reason with and about, the continuous use of this word in these discussions can actually be harmful by misleading people who don’t know better.

          This is a timely reminder and informative for people who want to seem smug I guess?

          Thanks for assuming good faith, I suppose.

          I mean, have you used them?

I have, in fact, used multiple popular LLM models currently available, including paid offerings, and spent way too much time listening to both people who know about this subject and people who don't. I can safely say LLMs don't understand anything. At all.

          How can you say they don’t understand anything?

          See above.

          • @FizzyOrange · 1 point · 1 month ago

            I think understanding requires knowledge, and LLMs work with statistics around text, not knowledge.

            You’re already making the assumption that “statistics around text” isn’t knowledge. That’s a very big assumption that you need to show. And before you even do that you need a precise definition of “knowledge”.

            Ask me a thousand times the solution of a math problem, and a thousand times I’ll give you the same solution.

            Sure but only if you are certain of the answer. As soon as you have a little uncertainty that breaks down. Ask an LLM what Obama’s first name is a thousand times and it will give you the same answer.

            Does my daughter not have any knowledge because she can’t do 12*2 reliably 1000 times in a row? Obviously not.

            it’ll just make things up

            Yes that is a big problem, but not related to this discussion. Humans can make things up too, the only difference is they don’t do it all the time like LLMs do. Well actually I should say most humans don’t. I worked with a guy who was very like an LLM. Always an answer but complete bullshit half the time.

            they contain information about—the statistical relations between tokens. That’s not the same as understanding what those tokens actually mean

            Prove it. I assert that it is the same.

            • @[email protected] · 1 point · 1 month ago

              You’re already making the assumption that “statistics around text” isn’t knowledge. That’s a very big assumption that you need to show.

              And you’re making the assumption that it could be. Why am I the only one who needs to show anything?

I’m saying that LLMs fail at many basic tasks that any person who could commonly be said to understand them wouldn’t fail at. You brought up the Turing test as though it were an actual, widely accepted scientific measure of understanding.

              Turing did not explicitly state that the Turing test could be used as a measure of “intelligence”, or any other human quality.

              Nevertheless, the Turing test has been proposed as a measure of a machine’s “ability to think” or its “intelligence”. This proposal has received criticism from both philosophers and computer scientists. […] Every element of this assumption has been questioned: the reliability of the interrogator’s judgement, the value of comparing the machine with a human, and the value of comparing only behaviour. Because of these and other considerations, some AI researchers have questioned the relevance of the test to their field.

Source: Wikipedia.


              Sure but only if you are certain of the answer. As soon as you have a little uncertainty that breaks down.

              What do you mean, “certain of the answer?” It’s math. I apply knowledge, my understanding gained through study, to reason about and solve a problem. Ask me to solve it again, the rules don’t change; I’ll get the same answer. Again, what do you mean?

              Ask an LLM what Obama’s first name is a thousand times and it will give you the same answer.

              Apples to oranges. “What’s Obama’s first name” doesn’t require the same kind of skills as solving a math problem.

Also, it took me 7 attempts to get ChatGPT to be confidently wrong about Obama’s name (screenshot not reproduced here):

              It couldn’t even give me the same answer 7 times.

              Does my daughter not have any knowledge because she can’t do 12*2 reliably 1000 times in a row? Obviously not.

That’s not my argument. If your daughter hasn’t learned multiplication yet, there’s no way she could guess the answer. Once she has grown and learned it, though, I bet she’ll be able to answer reliably. And I fully believe she’ll understand more about our world than any LLM. I hope you believe that as well.

              it’ll just make things up

              Yes that is a big problem, but not related to this discussion.

              It’s absolutely related, because as I stated, LLMs have no concept of knowing. Even if there are humans that’ll lie, make things up, spread misinformation—sometimes even on purpose—at least there are also humans who won’t. People who’ll try to find the truth. People that will say, “Actually, I’m not sure. Why don’t we look into it together?”

LLMs don’t do that, and they fundamentally can’t. Any refusal to answer a question is a guardrail put in place by their developers, and researchers are already looking into how to subvert those guardrails.

              I worked with a guy who was very like an LLM. Always an answer but complete bullshit half the time.

              Sorry to hear that. From experience, I know they can cause a lot of damage, even unintentionally.

              That’s not the same as understanding what those tokens actually mean

              Prove it. I assert that it is the same.

              Very confident assertion, there. Can I ask where’s your proof?

              I see that you also neglected to answer a critical part of my comment, so I’ll just copy and paste it here.

              At this point, I don’t think it’s unreasonable to say that insisting LLMs understand anything is a discussion more related to the meaning of words than to current AI capabilities. In fact, since understanding is more closely associated with knowledge that you can reason with and about, the continuous use of this word in these discussions can actually be harmful by misleading people who don’t know better.

              Any opinion on this?

              • @FizzyOrange · 1 point · 1 month ago

                And you’re making the assumption that it could be. Why am I the only one who needs to show anything?

                “Could be” is the null hypothesis.

                any person

                Hmm I’m guessing you don’t have children.

                What do you mean, “certain of the answer?” It’s math

                Oh dear. I dunno where to start here… but basically while maths itself is either true or false, our certainty of a mathematical truth is definitely not. Even for the cleverest mathematicians there are now proofs that are too complicated for humans to understand. They have to be checked by machines… then how much do you trust that the machine checker is bug free? Formal verification tools often have bugs.

                Just because something “is math” doesn’t mean we’re certain of it.

                Can I ask where’s your proof?

                I don’t have proof. That’s my point. Your position is no stronger than the opposite position. You just asserted it as fact.

    • @[email protected] · 5 points · 1 month ago

      It’s unclear that AI is the right tool at all. It’s certainly possible to use some automated conversion libraries, and then have human programmers fill in the gaps.
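For a sense of what that division of labor might look like: mechanical transpilers (c2rust is one existing example) emit Rust that compiles but preserves C's unsafe pointer semantics, and the "gap" left for a human or a model is the safe, idiomatic rewrite. A rough before/after sketch, with function names invented for illustration:

```rust
// What a mechanical C-to-Rust transpiler tends to produce: it compiles,
// but it keeps the C original's raw-pointer semantics and unsafety.
unsafe fn sum_raw(p: *const i32, n: usize) -> i32 {
    let mut total = 0;
    for i in 0..n {
        // raw pointer arithmetic, exactly as in the C source
        total += unsafe { *p.add(i) };
    }
    total
}

// The rewrite a human (or an AI) would aim for: safe, idiomatic Rust,
// where the slice carries its own length and no unsafe block is needed.
fn sum(values: &[i32]) -> i32 {
    values.iter().sum()
}

fn main() {
    let data = [1, 2, 3, 4];
    assert_eq!(unsafe { sum_raw(data.as_ptr(), data.len()) }, 10);
    assert_eq!(sum(&data), 10);
    println!("both versions agree");
}
```

The transpiled form is a faithful but unsafe translation; only the second version actually delivers the memory-safety benefit the project is after.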