• 5 Posts
  • 3 Comments
Joined 1 year ago
Cake day: October 12th, 2023






  • mrh to Haskell · CSV Parsing
    1 year ago

    I should have been clearer that it’s initial input to the program, not input queried in the middle of its runtime.

    I think the easiest approach is reading the named records in as hashmaps and working with them that way, so that I can do filters/comparisons on the keys (which are the headers). I don’t need anything specialized enough to warrant creating a new FromRecord instance.
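
    For instance, a minimal sketch using cassava’s decodeByName, where input.csv and the "status"/"open" header and value are made up for illustration:

    ```haskell
    {-# LANGUAGE OverloadedStrings #-}

    import qualified Data.ByteString.Lazy as BL
    import qualified Data.HashMap.Strict as HM
    import qualified Data.Vector as V
    import Data.Csv (Header, NamedRecord, decodeByName)

    main :: IO ()
    main = do
      csvData <- BL.readFile "input.csv"  -- hypothetical input file
      -- NamedRecord is cassava's alias for HashMap ByteString ByteString
      case decodeByName csvData :: Either String (Header, V.Vector NamedRecord) of
        Left err -> putStrLn err
        Right (_header, rows) ->
          -- filter rows directly on a header key, no bespoke instance needed
          print (V.filter (\r -> HM.lookup "status" r == Just "open") rows)
    ```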


  • mrh to Haskell · CSV Parsing
    1 year ago

    Right, Map is polymorphic while records are concrete; that makes sense.

    Those parser definitions in the source are enlightening; I see now that I was thinking a bit narrowly about how e.g. parseNamedRecord could be instantiated for a type.
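
    For comparison, the narrow fixed-field style of instance looks roughly like this (Person and its "name"/"age" headers are made up for illustration):

    ```haskell
    {-# LANGUAGE OverloadedStrings #-}

    import Data.Csv (FromNamedRecord (..), (.:))

    -- Hypothetical row type with a fixed set of fields
    data Person = Person
      { name :: String
      , age  :: Int
      } deriving (Show)

    instance FromNamedRecord Person where
      parseNamedRecord r = Person <$> r .: "name" <*> r .: "age"
    ```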


  • mrh to Haskell · CSV Parsing
    1 year ago

    Ah yeah, thanks, that works! I didn’t think to look for pre-defined instances of the typeclass. Though I still wonder how the Parser for Map/HashMap is defined, since Maps can hold an arbitrary number of values rather than a record’s fixed number of fields.
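
    Since Parser is an Applicative, an instance for a map-like type doesn’t need a fixed field count; it can traverse however many key/value pairs the incoming record happens to contain. A rough sketch of the idea (not cassava’s actual source; Row is a made-up wrapper so it doesn’t clash with the instances the library already ships):

    ```haskell
    import qualified Data.HashMap.Strict as HM
    import Data.Csv (FromField (..), FromNamedRecord (..))
    import Data.Hashable (Hashable)

    newtype Row k v = Row (HM.HashMap k v)

    instance (Eq k, Hashable k, FromField k, FromField v)
          => FromNamedRecord (Row k v) where
      -- parse every (key, value) pair in the incoming record, however many
      -- there are, and rebuild the map from the results
      parseNamedRecord r =
        Row . HM.fromList <$> traverse parsePair (HM.toList r)
        where
          parsePair (k, v) = (,) <$> parseField k <*> parseField v
    ```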

    I also wonder why (it seems) I need a type annotation saying I want the incoming data decoded as a Map, since you don’t need one when you call e.g. a field accessor on one of your custom record types. That is, if you try to call Map.size on an incoming CSV record, you have to annotate explicitly that you want that record decoded as a Map. Whereas if you have a record data Foo = Foo {bar :: String} and define its parser in the standard way, you can just call bar on the incoming CSV record and it works, no annotation required.
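
    Concretely (input.csv is made up for illustration): Map.size has type Map k a -> Int, so it is polymorphic in both key and value and never pins down which FromNamedRecord instance decodeByName should use, whereas bar :: Foo -> String fixes the decoded type to Foo. So the Map version needs an annotation somewhere:

    ```haskell
    import Data.ByteString (ByteString)
    import qualified Data.ByteString.Lazy as BL
    import qualified Data.Map as Map
    import qualified Data.Vector as V
    import Data.Csv (decodeByName)

    main :: IO ()
    main = do
      csvData <- BL.readFile "input.csv"  -- hypothetical input file
      case decodeByName csvData of
        Left err -> putStrLn err
        Right (_, rows) ->
          -- Map.size says nothing about the key/value types, so the
          -- annotation on rows is what selects the Map instance
          print (V.map Map.size (rows :: V.Vector (Map.Map ByteString ByteString)))
    ```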

