Mirror is an entirely new concept in programming — just supply function signatures and some input-output examples, and AI does the rest.
So a ChatGPT wrapper that compiles a DSL to JavaScript. Ok.
Of course it would output JavaScript. What else?
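To be fair, the pipeline is easy to picture: the signature and the examples go into a prompt, and an ordinary JavaScript function comes out the other side. For the primes_less_than declaration quoted further down the thread, the emitted code would presumably look something like this sketch (my guess, not actual Mirror output):

    // Plausible generated implementation for:
    //   signature primes_less_than(x: number) -> [number]
    //   example primes_less_than(10) = [ 2, 3, 5, 7 ]
    function primes_less_than(x: number): number[] {
      const primes: number[] = [];
      for (let n = 2; n < x; n++) {
        // Trial division: n is prime if no d in [2, sqrt(n)] divides it.
        let isPrime = true;
        for (let d = 2; d * d <= n; d++) {
          if (n % d === 0) { isPrime = false; break; }
        }
        if (isPrime) primes.push(n);
      }
      return primes;
    }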
    // Hack the mainframe to skim pennies from ongoing transactions
    async function addMoneyToMyBankAccount(dollarAmount: number): Promise<"success">
Alright let’s go
almost like a shitty Prolog that won’t work half the time!
Doesn’t Prolog already “not work half the time”? (Disclaimer: I haven’t used it.)
I don’t mean this in a toxic way, but this is probably the worst idea I have seen yet for AI in programming. People should use less AI and learn more about how to program. It’s better in the long term.
https://github.com/AZHenley/Mirror
Is the language and its interpretation predictable and exact? If you install a newer version of the AI, can the exact same code behavior be guaranteed? And what’s the benefit over using AI tools that generate code in a static language, instead of leaving it to be interpreted?
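Swapping in a newer model will almost certainly change the generated code; the only hard constraint the language gives you is the examples themselves, so at minimum you would want them replayed against whatever the model emits. A rough sketch of that check in plain TypeScript (the Example type and checkExamples are my stand-ins, not Mirror’s API):

    // Replay declared input/output examples against a model-generated function.
    // Example and checkExamples are hypothetical; Mirror may or may not do this internally.
    type Example = { args: unknown[]; expected: unknown };

    function checkExamples(impl: (...args: any[]) => unknown, examples: Example[]): boolean {
      return examples.every(({ args, expected }) => {
        const actual = impl(...args);
        // Compare structurally so array results like [ 2, 3, 5, 7 ] match by value.
        return JSON.stringify(actual) === JSON.stringify(expected);
      });
    }

Even then it only pins down the listed cases; everything in between is whatever the model felt like writing that day.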
People should use less AI and learn more about how to program
Yes. Once you know how to program, you can see the pitfalls of AI.
Interesting, but I’ve never needed AI for coding. Well, twice, and I had to make changes, but I wouldn’t use AI to generate code.
Could I do:
    signature primes_less_than(x: number) -> [number]
    example primes_less_than(2) = []
    example primes_less_than(10) = [ 2, 3, 5, 7 ]

    primes_less_than(10582319112759318014901241439012831231539517)
?
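That last call is the fun part. If the generated code is ordinary JavaScript, a 44-digit argument doesn’t even survive parsing: integer literals above Number.MAX_SAFE_INTEGER (2^53 - 1, roughly 9e15) get silently rounded to the nearest representable double. Nothing Mirror-specific, just plain TypeScript:

    // Integers above 2**53 - 1 cannot be represented exactly as a JS number.
    const big = 10582319112759318014901241439012831231539517;
    console.log(Number.isSafeInteger(big)); // false: the literal was already rounded
    console.log(big === big + 1);           // true: adjacent integers collapse at this magnitude
    // Exact arithmetic needs BigInt, and only helps if the generated code uses it throughout:
    console.log(BigInt("10582319112759318014901241439012831231539517"));

And even with BigInt, enumerating every prime below ~1e43 is hopeless (there are on the order of 10^41 of them), so the two declared examples would pass while this call simply never comes back.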
I don’t pay for OpenAI, so I can’t try the playground.
Ooooh, this oughta be good.