• 7 Posts
  • 201 Comments
Joined 2 years ago
Cake day: June 12th, 2023


  • Well, do we know what the blockers are for Tesla?
    I feel like when I watch videos of FSD on cars, the representation of the world on the screen is rather good.
    Now, given that my datapoint is maybe 30 minutes of video in total, is the issue in:
    a) estimating the distance to obstacles in the surroundings from camera data, or in:
    b) reading street signs, road markings, stop lights etc., or in:
    c) doing the right thing, given a correct set of data about the surroundings?

    Lidar / radar / sonar would only help with a).
    Or is it a combination of all three, and the (relatively) cheap sensor would at least eliminate a), so one could focus on b) and c)?







  • Because you don’t train your self-hosted LLM.
    As a result you only pay for the electricity of computing your tokens (your request). This can be especially reasonable if the same machine also does local game streaming and/or transcoding, and thus already meets the requirements to host an LLM.
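    The electricity-only cost can be estimated with a quick back-of-the-envelope calculation. All the numbers below (GPU power draw, generation speed, electricity price) are made-up illustrative assumptions, not measurements of any particular setup:

```python
# Back-of-the-envelope electricity cost of one self-hosted LLM request.
# Every constant here is an assumption for illustration only.
GPU_POWER_W = 300      # assumed GPU draw under load, in watts
TOKENS_PER_SEC = 30    # assumed generation speed of a local model
PRICE_PER_KWH = 0.30   # assumed electricity price, in $/kWh

def request_cost(tokens: int) -> float:
    """Electricity cost in dollars for generating `tokens` tokens."""
    seconds = tokens / TOKENS_PER_SEC
    kwh = GPU_POWER_W * seconds / 3600 / 1000
    return kwh * PRICE_PER_KWH

print(f"${request_cost(1000):.4f}")  # cost of a 1000-token answer
```

    With these assumptions a 1000-token answer costs well under a tenth of a cent, which is why the marginal cost feels negligible once the hardware is already there for other reasons.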

    Unless you have rather unreasonable means, your local LLM will just be much more limited in parameters (size), and will not be as good as other, much larger models.

    Privacy, ethics and personal interest are usually the largest drivers, from what I can tell.


  • When asked about Nintendo’s solution for backwards compatibility with Switch games and the GameCube classics available on the system, the developers confirmed these games are actually emulated. (This is similar to what Xbox does with backwards compatibility).

    “It’s a bit of a difficult response, but taking into consideration it’s not just the hardware that’s being used to emulate, I guess you could categorize it as software-based,” Sasaki said of the solution.

    They are (mostly?) talking about GameCube, right?..
    right?

    Or is that the reason for the Switch emulator witch hunt: they actually “bought” the tech?


  • I doubt that it’ll really have killer features.

    You’ll most likely be able to exchange the dual-hotend toolhead for a laser hotend; it’ll have a heated AMS, and it may have a vinyl-cutter head.

    I don’t really think I’d want to laser on my heated bed, or cut on it either. The fumes from lasering will impact the durability of anything in the printer, and without really serious ventilation it will produce lots of dust (well, ash).
    Cutting with the same head is weird, as a cutter needs to resist a bit of cutting force.

    The dual-nozzle design is interesting, but I think it’s still vastly inferior to multiple toolheads; with anything over 2 materials there is still cutting required. Depending on how they solved feeding the two hotends, I suspect there will be quite a bit of added complexity when loading the AMS, where you have to think about which head needs which filament.
    Using a single extruder gear for both hotends also increases the risk of cross-contamination. I’ve never had a printer that didn’t occasionally chew filament.
    Moving the hotends on linear rails and having a mechanical drop-stopper on the hotend all increase complexity; I’m not sure how bad blobs of doom will get here.

    If they actually use their touted servo design on the CoreXY kinematics, that will be interesting, because conventional wisdom says it doesn’t really improve 3D-printing performance. At least not until you get to ridiculous builds (think Minuteman).

    Cost will be interesting, as the H2D was apparently touted to “be above the current X1 line”. If that includes the X1E and its $2500 price tag, it would be… rather expensive.
    But even if it’s “just” more expensive than the X1C at $1200/$1450, coming to… idk, $1500 in its bare configuration, that’s a rather big chunk of change for a hobbyist. And they will (hopefully) have lost lots of enthusiasts with their firmware stunt.

    Something kinda cool that could theoretically be done is print smoothing with the laser: print it, change the tool, then laser away (at least) the stair-stepping on top.



  • The whole idea is they should be safer than us at driving. It only takes fog (or a painted wall) to conclude that won’t be achieved with cameras only.

    Well, I do still think that cameras could reach “superhuman” levels of safety.
    (Very dense) fog makes the cameras useless; a self-driving car would have to slow way down or shut itself off. If cameras are just one of a variety of inputs, they drop out as well, reducing the available information. How would you handle that then? If the car still has to drop out or slow down just as much, you gain nothing again. /e: my original interpretation is obviously wrong, you get the additional information whenever the environment permits.
    And for the painted wall: cameras should be able to detect that. It’s just that Tesla presumably hasn’t implemented defenses against active attacks yet.

    You had a lot of hands in this paragraph. 😀
    I like to keep spares on me.

    I’m exceptionally doubtful that the related costs were anywhere near this number.

    Cost has been developing rapidly. Pretty sure several years ago (about when Tesla first started announcing it’d be ready in a year or two) it was in the tens of thousands. But you’re right, more current estimates seem to be in the range of $500-2000 per unit, and 0-4 units per car.

    it’s inconceivable to me that cameras only could ever be as safe as having a variety of inputs.
    Well, diverse sensors always reduce the chance of confident misinterpretation.
    But they also mean you can’t “do one thing, and do it well”, as now you have to do 2-4 things (camera, lidar, radar, sonar) well. If one were to get to the point of choosing between one really good data source and four really shitty ones, it becomes conceivable to me.
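    The “four shitty sensors vs. one good one” trade-off can be put in rough numbers. The probabilities below are entirely made up, and the independence assumption is unrealistic (real sensors share failure modes like weather), but it illustrates why diversity helps:

```python
# Rough illustration: chance that ALL sensors misread the same scene,
# assuming (unrealistically) independent, made-up failure probabilities.
from math import prod

def combined_failure(p_failures: list[float]) -> float:
    """Probability that every sensor fails at once, if failures are independent."""
    return prod(p_failures)

one_good_camera = combined_failure([0.001])                  # one excellent sensor
four_mediocre = combined_failure([0.05, 0.05, 0.05, 0.05])   # camera, lidar, radar, sonar

print(one_good_camera)  # 0.001
print(four_mediocre)    # ~6.25e-06
```

    Under these toy assumptions, four mediocre sensors beat one excellent one by more than two orders of magnitude; correlated failures (fog degrading both camera and lidar) would eat into that advantage.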

    From what I remember there is distressingly little oversight for allowing self-driving cars on the road, as long as the company is willing to be on the hook for accidents.





  • They do.

    But “all self driving cars” are practically only Waymo’s.
    Level 4 autonomy is the point at which a human is no longer required to be able to intercede at any moment, and thus no longer has to be actively paying attention and be sober.
    Tesla is not there yet.

    On the other hand, this is an active attack against the technology.
    Mirrors or any super-absorber (possibly Vantablack or similar) would fuck up lidar. Which is a good reason for diversifying the sensors.

    On the other hand I can understand Tesla going “humans use visible light only, so in principle that has to be sufficient for a self-driving car as well”, because in principle I agree. In practice… well, while this seems much more like click-bait than an actual issue for a self-driving taxi, diversifying your input chain makes a lot of sense in my book. Then again, if it cost me $20k more down the road, and cameras reached the same safety, I’d be a bit pissed.





  • That is more editing and good lighting.
    There are always layer lines.

    Depending on your needs I would recommend:
    Voron (DIY; will take about a week to build and a week to tune),
    Prusa (depends on your preference, assemble yourself or pre-built; depending on how much time you have, a MK4S with MMU3, or the Core One, which will take several months to get to you, with the MMU3 later once it becomes compatible),
    Qidi (cheap, Chinese, will likely work decently after some tuning)