What papers or textbooks do i need to read to have all the basics / background knowledge to use pytorch and understand what I am doing based on solely the documentation pytorch provides?

  • TropicalDingdong@lemmy.world · 11 upvotes · 11 months ago

    I mean what are you trying to do?

    It’s about the same as TensorFlow in that it’s a largely abstracted language.

    It’s Lego blocks with ML components. I think the dataset builders/pipelines are better than TF’s. I think TF is a bit nicer if you need to do some quick and dirty assessments of model performance.

    I would say just hop into Hugging Face and pull down some models/repos and start playing around.

    More broadly, it’s more important to understand big-picture concepts in AI/ML than any language in particular, especially around frameworks and data.

    • AnarchistsForDemocracy@lemmy.world (OP) · 4 upvotes · 11 months ago

      I mean what are you trying to do?

      I am trying to understand how an example I just coded using PyTorch actually works: how the convolutional network works, meaning what the arguments of Conv2d (or whatever it’s called) are, and what ReLU is. I am digging through the documentation, but it seems I am missing a lot of basics.

      My best bet was to read papers, but since we’re already a couple of years into the whole deep learning thing, it is quite a challenge to identify the foundational papers among the many that just repeat them.

      • TropicalDingdong@lemmy.world · 7 upvotes · 11 months ago

        I think if you haven’t found them yet, some 3Blue1Brown videos might be helpful.

        Like, it’s important to get an understanding of how the big-picture stuff works conceptually, but realistically you will probably just be making minor modifications to existing frameworks. The frameworks have really ended up being almost more important in these most recent vintages of models, whereas the previous generations of models were very much architecture solutions.

        So in that regard, it’s more important to focus on understanding the frameworks around self-learning, attention, generative and discriminative approaches, etc.
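        (As one concrete taste of those concepts: scaled dot-product attention fits in a few lines of NumPy. This is only an illustrative sketch — the names and shapes are mine, not from any particular framework.)

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    # Scaled dot-product attention: softmax(Q K^T / sqrt(d)) V
    d = Q.shape[-1]
    weights = softmax(Q @ K.T / np.sqrt(d))
    return weights @ V, weights

rng = np.random.default_rng(0)
Q = rng.normal(size=(4, 8))   # 4 queries, dimension 8
K = rng.normal(size=(6, 8))   # 6 keys
V = rng.normal(size=(6, 8))   # 6 values
out, w = attention(Q, K, V)   # each query gets a weighted mix of the values
```

        Each row of `w` is a probability distribution over the keys, which is the whole trick.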

        After that, maybe you could answer a question for me.

        What is it you want to do? Do you want to build models? Do you want to develop frameworks? Do you want to work on algorithms?

        Because each of these really requires its own skill set, and while they have some overlap, most people don’t do everything.

          • TropicalDingdong@lemmy.world · 4 upvotes · 11 months ago

            I mean that’s a pretty massive undertaking.

            If that’s your goal, don’t bother with pytorch at all.

            Start by implementing the individual pieces required for a simple machine learning model from scratch (NumPy only).

            You need to learn and be able to encode backpropagation, Adam, sigmoid, etc. I can’t remember them all offhand, but it’s maybe 4 or 5 different functions in total.
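            To give a sense of the scale involved, here is a hypothetical NumPy-only sketch of that exercise: a tiny sigmoid network trained on XOR with hand-derived backpropagation and plain gradient descent (Adam would replace the update step).

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Tiny dataset: XOR
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)   # hidden layer
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)   # output layer

def forward(X):
    h = sigmoid(X @ W1 + b1)
    return h, sigmoid(h @ W2 + b2)

_, out = forward(X)
loss_init = float(((out - y) ** 2).mean())

lr = 1.0
for _ in range(5000):
    h, out = forward(X)
    # Backward pass: hand-derived gradients of the MSE loss, using
    # sigmoid'(z) = sigmoid(z) * (1 - sigmoid(z)); constant factors
    # are folded into the learning rate.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    # Plain gradient-descent updates (Adam would adapt these per-parameter)
    W2 -= lr * h.T @ d_out; b2 -= lr * d_out.sum(0)
    W1 -= lr * X.T @ d_h;   b1 -= lr * d_h.sum(0)

_, out = forward(X)
loss_final = float(((out - y) ** 2).mean())
```

            That really is the whole thing: a forward pass, a backward pass, and an update rule.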

            There are many tutorials for this. If you need me to, I can link you to some.

            This is a great way to get the basics down, however, beware that things like pytorch are ultimately collaborative projects involving thousands of team members incorporating advancements and research from all kinds of sources.

          • Newtra@pawb.social · 2 upvotes · 11 months ago

            Honestly, I don’t think there’s room for a competitor until a whole new paradigm is found. PyTorch’s community is the biggest and still growing. With their recent focus on compilation, not only are TF and JAX losing any chance at having an advantage, but the barrier to entry for new competitors is becoming much higher. Compilation takes a LOT of development time to implement, and it’s hard to ignore 50–200% performance boosts.

            Community size tends to ultimately drive open source software adoption. You can see the same with the web frameworks - in the end, most people didn’t learn React because it was the best available library, they learned it because the massive community had published so many tutorials and driven so many job adverts that it was a no-brainer to choose it over Angular, Vue, etc. Only the paradigm-shift libraries like Svelte and Htmx have had a chance at chipping away at React’s dominance.

      • jacksilver@lemmy.world · 3 upvotes · 11 months ago

        I wouldn’t focus on foundational papers; the current phase of deep learning is far enough along that there are better tutorials/resources that distill how these models work.

        I would actually recommend looking into books on deep learning, or something like a Udemy course (Harvard and Stanford may also have free courses online, but I’ve never been a fan of their pacing). I can send you some recommendations if you want, but that’s probably the best/fastest way.

      • howrar@lemmy.ca · 3 upvotes · 11 months ago

        I know you said you couldn’t find what you were looking for in the docs, but just in case you were looking in the wrong place:

        • Conv2d gives you the exact mathematical formula that’s implemented along with some examples.
        • ReLU does the same and is even simpler.

        Besides the convolution operator, I believe all the math should have been covered in high school (summation, max, and basic arithmetic). And convolution is also just defined in terms of these same operations, so you should be able to understand the definition (see the discrete definition on the wiki page under the “cross-correlation of deterministic signals” section).
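        Both operations really are that small. Here’s a sketch in NumPy, assuming the 1-D discrete case for simplicity (the function names are just illustrative):

```python
import numpy as np

def relu(x):
    # ReLU(x) = max(0, x), applied elementwise
    return np.maximum(0, x)

def cross_correlate1d(signal, kernel):
    # (signal ⋆ kernel)[i] = sum_j signal[i + j] * kernel[j]
    # "valid" mode: only positions where the kernel fully overlaps.
    n = len(signal) - len(kernel) + 1
    return np.array([np.dot(signal[i:i + len(kernel)], kernel)
                     for i in range(n)])

x = np.array([1.0, 2.0, 3.0, 4.0])
k = np.array([1.0, 0.0, -1.0])
y = cross_correlate1d(x, k)   # matches np.correlate(x, k, mode="valid")
```

        It’s just the summation and max from the docs, written out by hand.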

        The math does look daunting if it’s your first time encountering it (I’ve been there), and sometimes all you really need is confirmation that you already have all the requisite knowledge.

        • AnarchistsForDemocracy@lemmy.world (OP) · 1 upvote · 11 months ago · edited

          Thank you, yeah. I found the Conv2d and ReLU pages on PyTorch’s site. I am struggling with the arguments that Conv2d accepts, and I just realized I need to refresh linear algebra first.

          Learning about Hermitian, transposed, and inverse matrices. Tbh I remembered how to multiply and about the determinant and all that, but there is a lot I forgot. So I am digging through the Matrix Cookbook currently, also reading a book on deep learning in parallel.
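          (While refreshing, NumPy is handy for sanity-checking those identities against small examples; this one is just for illustration.)

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [0.0, 3.0]])

At   = A.T                 # transpose: rows become columns
detA = np.linalg.det(A)    # determinant of a 2x2: 2*3 - 1*0 = 6
Ainv = np.linalg.inv(A)    # inverse: A @ Ainv gives the identity matrix
```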

          I am trying to find a way to get hold of and read this paper: Krizhevsky, A., Sutskever, I. & Hinton, G. ImageNet classification with deep convolutional neural networks. In Proc. Advances in Neural Information Processing Systems 25, 1090–1098 (2012)

          but have failed so far…

  • Newtra@pawb.social · 4 upvotes · 11 months ago · edited

    The easiest way to get the basics is to search for articles, online courses, and youtube videos about the specific modules you’re interested in. Papers are written for people who are already deep in the field. You’ll get there, but they’re not the most efficient way to get up to speed. I have no experience with textbooks.

    It helps to think of PyTorch as just a fancy math library. It has some well-documented frameworky structure (nn.Module) and a few differentiation engines, but all the deep learning-specific classes/functions (Conv2d, BatchNorm1d, ReLU, etc.) are just optimized math under the hood.
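    For instance, the heart of Conv2d’s forward pass (ignoring padding, stride, dilation, bias, and batching) is just a sliding dot product. A naive illustrative NumPy version, not PyTorch’s actual implementation:

```python
import numpy as np

def conv2d_forward(x, weight):
    """Naive 2-D cross-correlation, like Conv2d's forward pass.
    x:      (in_ch, H, W) input
    weight: (out_ch, in_ch, kH, kW) filters
    No padding, stride 1, no bias."""
    out_ch, in_ch, kH, kW = weight.shape
    _, H, W = x.shape
    out = np.zeros((out_ch, H - kH + 1, W - kW + 1))
    for o in range(out_ch):
        for i in range(out.shape[1]):
            for j in range(out.shape[2]):
                # Dot product of the filter with one patch of the input
                out[o, i, j] = np.sum(weight[o] * x[:, i:i + kH, j:j + kW])
    return out

x = np.arange(16, dtype=float).reshape(1, 4, 4)  # one 4x4 input channel
w = np.ones((1, 1, 2, 2))                        # one 2x2 filter
y = conv2d_forward(x, w)                         # shape (1, 3, 3)
```

    PyTorch’s version is heavily optimized, but it computes the same numbers.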

    You can see the math by looking for projects that reimplement everything in numpy, e.g. picoGPT or ConvNet in NumPy.

    If you can’t get your head around the tensor operations, I suggest searching for “explainers”. Basically for every impactful module there will be a bunch of “(module) Explained” articles or videos out there, e.g. Grouped Convolution, What are Residual Connections. There are also ones for entire models, e.g. The Illustrated Transformer. Once you start googling specific modules’ explainers, you’ll find people who have made mountains of them - I suggest going through their guides and learning everything that seems relevant to what you’re working on.

    If you’re not getting an explanation of something, just google and find another one. People have done an incredible job of making this information freely accessible in many different formats. I basically learned my way from webdev to an AI career with a couple years of casually watching YouTube videos.