• 0 Posts
  • 827 Comments
Joined 2 years ago
Cake day: June 17th, 2023


  • Linux@lemmy.ml · A noticeable difference in kernels? (16 days ago)

    Realtime is important on fully fledged workstations where timing is critical, which is the case for a lot of professional audio workloads. Linux is now another option for people in that space.

    Not sure Linux can run on microcontrollers. Those tend not to be very powerful and run simple OSs, if they have any OS at all. Though this might help the embedded world a bit by increasing the number of things you can do with devices that have a full system on chip (like the Raspberry Pi).



  • Don’t ignore the responses. If you abuse it too much there is a chance that the API will just block you permanently, and it is generally seen as not very nice; it takes resources on both ends to process even that response.

    The ratelimit crate is an OK solution for this and simple enough to implement/include in your code, but it can create a mismatch between your code and the API. If they ever change the limits you will need to adjust your program.

    A proxy solution seems overly complex in terms of infra to set up and maintain, so I would avoid that.

    A better solution can depend on the API. Quite often they send back the request quota you have left, either on every request or when you exceed the rate limit. You can build logic into your client for the API (or create a wrapper if you have not done so already) that understands these values and backs off when the limits are reached or nearly reached.

    Otherwise there are various things you can do depending on the complexity of the rate limit rules they have. The ratelimit crate is probably good for more complex cases, but you can just delay all requests for a while if the API's rate limiting is quite simple.

    You can also use an exponential backoff algorithm if you are not sure at all what the rules are (basically retry with an exponentially increasing delay, capped at an upper limit, until you get a successful response). This is also a great all-round solution for other types of failures: it stops your systems from hammering theirs if they ever encounter a different problem or go down for some reason. Though it is not the best option if you have more information about how long you should be waiting.
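    A rough sketch of that kind of backoff loop in Rust (the send_with_backoff function and the send_request closure are hypothetical stand-ins for whatever call you are actually making):

    use std::thread;
    use std::time::Duration;

    // Retry a request with an exponentially increasing delay, capped at an upper limit.
    // `send_request` is a hypothetical stand-in for the real API call.
    fn send_with_backoff<T, E>(mut send_request: impl FnMut() -> Result<T, E>) -> Result<T, E> {
        let mut delay = Duration::from_millis(100); // initial delay
        let max_delay = Duration::from_secs(60);    // upper limit on the delay
        let max_attempts = 10;

        for attempt in 1..=max_attempts {
            match send_request() {
                Ok(resp) => return Ok(resp),
                Err(e) if attempt == max_attempts => return Err(e),
                Err(_) => {
                    thread::sleep(delay);
                    delay = (delay * 2).min(max_delay); // double the delay on each retry
                }
            }
        }
        unreachable!("the loop always returns within max_attempts")
    }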


  • I disagree. It is more than just a nitpick. Saying black holes suck things in implies that they are doing something different from any other mass, which they are not. Would you say a star sucks in the stuff around it? Or a planet? Or a moon? No, that sounds absurd. It makes it sound like black holes are doing something different from everything else, which is misleading at best. The way things are described matters, as it paints a very different picture to the layman.



  • By clear receiver it means there is only one function a name can point to. For instance you cannot have:

    struct Foo;
    impl Foo {
        pub fn foo(self) {}
        pub fn foo(&self) {}
    }
    
    error[E0592]: duplicate definitions with name `foo`
     --> src/lib.rs:5:5
      |
    4 |     pub fn foo(self) {}
      |     ---------------- other definition for `foo`
    5 |     pub fn foo(&self) {}
      |     ^^^^^^^^^^^^^^^^^ duplicate definitions for `foo`
    

    Which means it is easy to see what the function requires when you call it. It will either take self or &self, and when it is &self the compiler can auto-reference the value for you. This is unlike C++, where you can overload a function with multiple different signatures and bodies, and thus you need to know the signature to know which function will actually be called.


    Yes, Rust has an operator for dereferencing as well (the *). This can be used to copy a value out of a reference, for simple types that implement Copy at least.
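    A small illustration:

    fn main() {
        let x: i32 = 42;
        let r: &i32 = &x;
        let y: i32 = *r; // `*` copies the value out of the reference, because i32 is Copy
        assert_eq!(y, 42);
        // with a non-Copy type, `*r` here would be a move out of a reference, which is not allowed
    }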


  • That is a bit more expensive and complex. Looks like this is configured with a couple of resistors for 5V from USB, which is simple to get, plus an optional voltage regulator to drop down to 3.3V. Full PD requires a chip and active negotiation for higher voltage levels. There are chips that do that, but it does increase the complexity, cost, and soldering skill required a bit. Might not be worth it if all you work on is 5V or 3.3V.


  • but I do think a sizeable portion of existing C++ devs who don’t want to use rust exist

    That may be true. But out of that pool of people it seemed that very, very few wanted to work on the fish project, so it was not helping them much at all. There is a vastly larger pool of people who don’t want to learn C++, and some of those may be willing to pick up Rust. It would not take much for that to outnumber the C++ devs who want to work on fish but also don’t want to learn Rust, given that there are not a huge number of regular contributors according to their announcement blog post.


  • Programmer Humor · It’s not meant to be (29 days ago)

    Protip: Don’t write 600 lines of code without ever testing it at all. And by testing I mean anything: manual testing included, or even just compiling it or running a linter over it. Do things incrementally and verify things at each step to make sure you are not drifting off course because of some faulty assumption you made near the start.



  • TBH I am not a fan of defer. It requires you to remember to use it and to know which things need cleanup code to be called. I have done quite a few code reviews of Go code at places I have worked in the past, and people were always forgetting to close open files or sockets or other things that needed it, resulting in quite a few resource leaks in production code. I got good at spotting these in reviews and even became a bit too proactive in pointing them out, highlighting things that looked like they needed to be closed but did not actually have a Close method on them… You just cannot tell from the code alone and need to look up the documentation for everything that has an Open method on it.

    So I much prefer Rust's drop semantics or C++'s RAII. Once something goes out of scope or is otherwise freed, it should automatically be cleaned up, rather than making the programmer remember to do so beforehand.
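    A minimal sketch of what I mean (the TempResource type is just made up for illustration):

    // Cleanup runs automatically when the value goes out of scope; there is nothing to remember to call.
    struct TempResource {
        name: String,
    }

    impl Drop for TempResource {
        fn drop(&mut self) {
            // a real type would close a file, socket, etc. here
            println!("cleaning up {}", self.name);
        }
    }

    fn main() {
        let _res = TempResource { name: String::from("connection") };
        println!("doing work");
        // `_res` is dropped here at the end of scope, without any explicit call
    }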

    Now, in a language like C, which does not and likely won't have drop semantics or RAII, it might make sense, as it is better than nothing at all. But I don't understand the arguments for putting it in C++ at all.


  • Syntax is in large part what people are used to, which is trivial to change by just using the thing for a while and getting used to the different syntax. But syntax is only part of a language. The tooling, documentation, error messages, and general feedback are all, IMO, much nicer in Rust than C++. It is also easier for people new to programming, or used to other languages, to get into than C++ is, even taking the syntax into account.

    C++ was one of the first languages I learnt - and now after not using it for years I cannot stand its syntax.


  • From their blog post:

    Finally, subjectively, C++ isn’t drawing in the crowds. We have never had a lot of C++ contributors. Over the 11 years fish used C++, only 17 people have at least 10 commits to the C++ code. We also don’t know a lot of people who would love to work on a C++ codebase in their free time.

    Hard to tank when you don’t have many to begin with. Rust is far nicer for new users to contribute to than old C++ code, which can be seen in their GitHub: in the last 24 months 16 people have contributed more than 10 commits. That is during the conversion period, so I don’t expect many of those to be C++ contributions. So Rust does not seem to have hurt their contributions at all and in fact looks to have helped.


  • First thing I typically do when that happens is update my system and reboot. This is useful for ensuring everything is in a known consistent state and there are no weird runtime issues that have crept in since you last booted. And it is always good to upgrade before you reboot, to ensure you are booting the latest kernel and drivers.

    If that does not help then I would start by closing down Steam completely (ensure it is not running in the systray at all). Then launch Steam through a terminal and start the game as you normally would. You will hopefully see some logs for the game in the terminal, though it is very game dependent whether they will be useful at all. If not, I would look online to see if the game logs anything to another file, as some games do their own logging or have a flag you can enable.

    If the game gives you some logs, and hopefully an error message, you can then see if it is useful to you, and if not, try googling for that error plus the game name. I find this tends to dig up more specific help for games than general searches for terms like "won't start" or "crashes", though sometimes those general terms can find a solution as well.

    Note: if you try to launch Steam in the terminal and it is already running, you won't get any logs at all from it; it basically just forwards things to the main instance or quits, as it does not need to do anything. Only the first instance you start will give you any useful logs.





  • For Windows, software compatibility is actually excellent: a lot of 32-bit Windows 95 software still runs perfectly on Windows 11 64 almost 30 years later. Nothing remotely close exists for Linux.

    I would need to see the numbers on that. A lot of software written back then assumed full admin access at all times, and I bet there is quite a lot that actually won't run on a modern system anymore.

    What’s worse is that software compiled for the current version of Linux X will not necessarily work for the current version of Linux Y. Linux distros insist that all the software must be compiled for their current releases or provided as source code.

    This is no longer true with things like flatpak. The kernel itself has even more legendary backwards compatibility promises than Windows does. It is mostly userspace in Linux that is a mess in that regard, but flatpak, and containers in general, fix that.

    However, savvy readers of this article will notice that Linux offers flatpaks, snaps, and AppImages. I’m not going to write an insightful treatise on their shortcomings, so I’ll just say it bluntly: these are all lightweight virtual machines.

    They are not virtual machines; they do what most Windows apps really do: ship with all the library code they need to run. There is no virtualizing of the kernel at all. That is closer to what Windows' compatibility layers do (not quite, more emulation of kernel APIs, like Wine does, but closer than what flatpak is doing).

    It’s crazy to think that they solve software incompatibility in Linux, they just work around it by making the user allocate and run gobs of binary code, unnecessarily taxing their storage, CPU and RAM. What’s worse, you can just as easily run them under Windows’ WSL. So what’s the point of having Linux installed on your computer in the first place?

    This coming from an OS that ships a whole other OS alongside itself just to get a decent terminal?

    Regressions are introduced all the time because Linux developers spend very little to no time checking that their code changes don’t cause regressions or breakages outside of the problems they’re trying to fix or features they’re implementing.

    Remember that time a Windows application update crashed half the world?

    Linux hasn’t seen any AAA titles for many years now,

    Huh? Most Windows games run just about flawlessly on Linux now, or with minor tweaks. The only real big problems here are games with kernel-level anticheat software. See the above comment about giving kernel-level access to arbitrary programs…

    Countless software tiles in Linux have a huge number of bugs and missing features

    … Really? It is software. It has bugs. All software does. Even that which runs under Windows.

    And once you start looking for people to answer your questions, you’ll see the real face of the Linux community.

    I have always found the Linux community to be far more welcoming than any Windows community I have seen…


    None of those arguments are good. Half apply to Windows just as much as to Linux, and the others are so woefully outdated that I had to double-check the article was not decades old.

    Linux has been ready for the desktop for a long time now. Maybe not every system and every use case, but far more than not. Most users could switch over to it if they wanted to, but they don't - not because of any fault of Linux, but because Windows is also good enough for their use case and they have no reason to switch. Their system already came with Windows and that is what they know.

    Gaming has come a long way in the past few years, to the point where quite a few gamers are starting to switch. And if you look at devices like the Steam Deck, the default option that works well enough is what wins out - or else you would have seen most people installing Windows on their Decks. Some do, but only a tiny fraction of them.

    Far more, I feel, are actually migrating to Linux on their main systems instead, in large part because of what Microsoft is doing with the latest version of Windows - it is giving people a reason to move off the defaults, as they are trashing them with the latest releases. Have a slightly old computer? You need to buy a new one or else you cannot upgrade. And if you do upgrade you get ads, recording of everything you do, and so much more shit that people are actively getting fed up over.


  • I generally think mocking should be the last resort tool you reach for when testing - not just for databases but for anything. Test with real things where you can and you bake in far fewer assumptions about their behavior. Though I also dislike testing against things that the tests have no control over, which the author does not go into any detail on at all and which is the big pitfall that makes testing against real things a huge pain.

    If possible, an in-process or at least a temporary in-memory substitute should be used over the real thing. For instance, if you are using sqlite (or an ORM that supports it) this is trivial. Each test can have its own instance that does not interfere with other tests (which is where flaky tests tend to come from), and everything being local is fast enough for unit tests.
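    A rough sketch of what that looks like in Rust, assuming the rusqlite crate (the table and test are made up for illustration):

    use rusqlite::Connection;

    #[test]
    fn inserts_and_reads_back_a_user() {
        // Each test gets its own throwaway in-memory database, so tests stay isolated and fast.
        let conn = Connection::open_in_memory().unwrap();
        conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT NOT NULL)", ())
            .unwrap();

        conn.execute("INSERT INTO users (name) VALUES (?1)", ["alice"]).unwrap();

        let name: String = conn
            .query_row("SELECT name FROM users WHERE id = 1", (), |row| row.get(0))
            .unwrap();
        assert_eq!(name, "alice");
    }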

    Lacking that, fake implementations are a good bet as well. A good example is the gofakes3 package, which acts as an in-memory implementation of the S3 API in Go. Again it is fast, you can spin up isolated instances in your tests, and it bakes fewer assumptions about how S3 works into your tests (and if its assumptions are false they can be fixed for everyone, whereas with mocks you have to find and fix every mock you have written).

    Even things like filesystem interactions are easy to test for real - I never understood people's obsession with mocking out filesystem calls. If your filesystem is unreliable you have bigger problems to worry about and fix. Isolation is the only real concern here, but that is trivial to achieve by creating a temp file or directory unique to each test case. The only change needed in your code is to be able to change the prefix for where it looks for the files/directories in question (which is often something you want to support anyway).
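    For instance, a sketch using the tempfile crate, with a made-up write_report function standing in for the code under test:

    use std::fs;
    use std::io;
    use std::path::Path;

    // Hypothetical function under test: it takes the directory prefix as a parameter
    // instead of hard-coding a path, which is the only change needed to make it testable.
    fn write_report(dir: &Path, body: &str) -> io::Result<()> {
        fs::write(dir.join("report.txt"), body)
    }

    #[test]
    fn writes_report_to_the_given_directory() {
        // a unique temporary directory per test keeps filesystem tests isolated from each other
        let dir = tempfile::tempdir().unwrap();
        write_report(dir.path(), "hello").unwrap();

        let contents = fs::read_to_string(dir.path().join("report.txt")).unwrap();
        assert_eq!(contents, "hello");
        // the temp directory and its contents are removed when `dir` is dropped
    }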

    We really need to stop mocking out everything and reaching for mocks as the first tool when we want to test something. Isolating a test so it only exercises a single function in isolation from everything else just means you need to write more tests, and the tests don't end up testing as much, since you fail to test your assumptions about the interactions between your components. As long as your tests are fast and reliable there is little need to break them down into tiny component bits and mock out the things they need to use.

    Use real implementations if you can, fakes if you cannot control the real implementation, and mocks as a last resort.