Day 11: Plutonian Pebbles
Megathread guidelines
- Keep top-level comments as solutions only; if you want to say something other than a solution, put it in a new post. (Replies to comments can be whatever.)
- You can post code in code blocks by using three backticks, the code, and then three backticks, or use something such as https://topaz.github.io/paste/ if you prefer sending it through a URL
FAQ
- What is this?: Here is a post with more details: https://programming.dev/post/6637268
- Where do I participate?: https://adventofcode.com/
- Is there a leaderboard for the community?: We have a programming.dev leaderboard; info on how to join is in this post: https://programming.dev/post/6631465
Haskell
```haskell
import Data.Monoid
import Control.Arrow

data Tree v = Tree (Tree v) v (Tree v)

-- https://stackoverflow.com/questions/3208258
memo1 f = index nats
  where
    nats = go 0 1
    go i s = Tree (go (i + s) s') (f i) (go (i + s') s')
      where s' = 2 * s
    index (Tree l v r) i
      | i < 0 = f i
      | i == 0 = v
      | otherwise = case (i - 1) `divMod` 2 of
          (i', 0) -> index l i'
          (i', 1) -> index r i'

memo2 f = memo1 (memo1 . f)

blink = memo2 blink'
  where
    blink' c n
      | c == 0 = 1
      | n == 0 = blink c' 1
      | even digits = blink c' l <> blink c' r
      | otherwise = blink c' $ n * 2024
      where
        digits = succ . floor . logBase 10 . fromIntegral $ n
        (l, r) = n `divMod` (10 ^ (digits `div` 2))
        c' = pred c

doBlinks n = getSum . mconcat . fmap (blink n)
part1 = doBlinks 25
part2 = doBlinks 75

main = getContents >>= print . (part1 &&& part2) . fmap read . words
```
Rust
Part 2 is solved with recursion and a cache, which is indexed by stone numbers and remaining rounds and maps to the previously calculated expansion size. In my case, the cache only grew to 139320 entries, which is quite reasonable given the size of the result.
Solution
```rust
use std::collections::HashMap;

fn parse(input: String) -> Vec<u64> {
    input
        .split_whitespace()
        .map(|w| w.parse().unwrap())
        .collect()
}

fn part1(input: String) {
    let mut stones = parse(input);
    for _ in 0..25 {
        let mut new_stones = Vec::with_capacity(stones.len());
        for s in &stones {
            match s {
                0 => new_stones.push(1),
                n => {
                    let digits = s.ilog10() + 1;
                    if digits % 2 == 0 {
                        let cutoff = 10u64.pow(digits / 2);
                        new_stones.push(n / cutoff);
                        new_stones.push(n % cutoff);
                    } else {
                        new_stones.push(n * 2024)
                    }
                }
            }
        }
        stones = new_stones;
    }
    println!("{}", stones.len());
}

fn expansion(s: u64, rounds: u32, cache: &mut HashMap<(u64, u32), u64>) -> u64 {
    // Recursion anchor
    if rounds == 0 {
        return 1;
    }
    // Calculation is already cached
    if let Some(res) = cache.get(&(s, rounds)) {
        return *res;
    }
    // Recurse
    let res = match s {
        0 => expansion(1, rounds - 1, cache),
        n => {
            let digits = s.ilog10() + 1;
            if digits % 2 == 0 {
                let cutoff = 10u64.pow(digits / 2);
                expansion(n / cutoff, rounds - 1, cache)
                    + expansion(n % cutoff, rounds - 1, cache)
            } else {
                expansion(n * 2024, rounds - 1, cache)
            }
        }
    };
    // Save in cache
    cache.insert((s, rounds), res);
    res
}

fn part2(input: String) {
    let stones = parse(input);
    let mut cache = HashMap::new();
    let sum: u64 = stones.iter().map(|s| expansion(*s, 75, &mut cache)).sum();
    println!("{sum}");
}

util::aoc_main!();
```
Also on github
Dart
I really wish Dart had memoising built in. Maybe the new macro feature will allow this to happen, but in the meantime, here’s my hand-rolled solution.
```dart
import 'package:collection/collection.dart';

var counter_ = <(int, int), int>{};

int counter(s, r) => counter_.putIfAbsent((s, r), () => _counter(s, r));

int _counter(int stone, [int rounds = 25]) => (rounds == 0)
    ? 1
    : next(stone).map((e) => counter(e, rounds - 1)).sum;

List<int> next(int s) {
  var ss = s.toString(), sl = ss.length;
  if (s == 0) return [1];
  if (sl.isOdd) return [s * 2024];
  return [ss.substring(0, sl ~/ 2), ss.substring(sl ~/ 2)]
      .map(int.parse)
      .toList();
}

solve(List<String> lines, [count = 25]) =>
    lines.first.split(' ').map(int.parse).map((e) => counter(e, count)).sum;
```
Python
Part 1: ~2 milliseconds, Part 2: ~35 milliseconds, Total Time: ~35 milliseconds
You end up computing part 1 at the same time as part 2, but because of how Advent of Code works you need to rerun the code after part 1 is solved, so part 2 is effectively the total time.
Fast Code
```python
from time import time_ns

transform_cache = {}

def transform(current_stone):
    if current_stone == "0":
        res = ["1"]
    else:
        length = len(current_stone)
        if length % 2 == 0:
            mid = length // 2
            res = [str(int(current_stone[:mid])), str(int(current_stone[mid:]))]
        else:
            res = [str(int(current_stone) * 2024)]
    transform_cache[current_stone] = res
    return res

def main(initial_stones):
    stones_count = {}
    for stone in initial_stones:
        stones_count[stone] = stones_count.get(stone, 0) + 1

    part1 = 0
    for i in range(75):
        new_stones_count = {}
        for stone, count in stones_count.items():
            for r in (transform_cache.get(stone) if stone in transform_cache else transform(stone)):
                new_stones_count[r] = new_stones_count.get(r, 0) + count
        stones_count = new_stones_count
        if i == 24:
            part1 = sum(stones_count.values())

    return part1, sum(stones_count.values())

if __name__ == "__main__":
    with open('input', 'r') as f:
        input_data = f.read().replace('\r', '').replace('\n', '').split()

    start_time = time_ns()
    part_one, part_two = main(input_data)
    stop_time = time_ns() - start_time

    time_len = min(9, ((len(str(stop_time)) - 1) // 3) * 3)
    time_conversion = {9: 'seconds', 6: 'milliseconds', 3: 'microseconds', 0: 'nanoseconds'}
    print(f"Part 1: {part_one}\nPart 2: {part_two}\nProcessing Time: {stop_time / (10**time_len)} {time_conversion[time_len]}")
```
And now we get into the days where caching really is king. My first attempt didn’t go so well: I tried to cache the full list result as one step, instead of individually caching the result of calculating each stone per step.
I think my original attempt is still calculating at home, but I finished up this much better version on the trip to work.
All hail public transport.
C#
```csharp
List<long> stones = new List<long>();

public void Input(IEnumerable<string> lines)
{
    stones = string.Concat(lines).Split(' ').Select(v => long.Parse(v)).ToList();
}

public void Part1()
{
    var expanded = TryExpand(stones, 25);
    Console.WriteLine($"Stones: {expanded}");
}

public void Part2()
{
    var expanded = TryExpand(stones, 75);
    Console.WriteLine($"Stones: {expanded}");
}

public long TryExpand(IEnumerable<long> stones, int steps)
{
    if (steps == 0)
        return stones.Count();
    return stones.Select(s => TryExpand(s, steps)).Sum();
}

Dictionary<(long, int), long> cache = new Dictionary<(long, int), long>();

public long TryExpand(long stone, int steps)
{
    var key = (stone, steps);
    if (cache.ContainsKey(key))
        return cache[key];
    var result = TryExpand(Blink(stone), steps - 1);
    cache[key] = result;
    return result;
}

public IEnumerable<long> Blink(long stone)
{
    if (stone == 0)
    {
        yield return 1;
        yield break;
    }
    var str = stone.ToString();
    if (str.Length % 2 == 0)
    {
        yield return long.Parse(str[..(str.Length / 2)]);
        yield return long.Parse(str[(str.Length / 2)..]);
        yield break;
    }
    yield return stone * 2024;
}
```
Haskell
Sometimes I want something mutable; this one takes 0.3 s, and profiling tells me 30% of my time is spent creating new objects. :/
```haskell
import Control.Arrow
import Data.Map.Strict (Map)
import qualified Data.Map.Strict as Map
import qualified Data.Maybe as Maybe

type StoneCache = Map Int Int
type BlinkCache = Map Int StoneCache

parse :: String -> [Int]
parse = lines >>> head >>> words >>> map read

memoizedCountSplitStones :: BlinkCache -> Int -> Int -> (Int, BlinkCache)
memoizedCountSplitStones m 0 _ = (1, m)
memoizedCountSplitStones m i n
  | Maybe.isJust maybeMemoized = (Maybe.fromJust maybeMemoized, m)
  | n == 0 = do
      let (r, rm) = memoizedCountSplitStones m (pred i) (succ n)
      let rm' = cacheWrite rm i n r
      (r, rm')
  | digitCount `mod` 2 == 0 = do
      let (r1, m1) = memoizedCountSplitStones m (pred i) firstSplit
      let (r2, m2) = memoizedCountSplitStones m1 (pred i) secondSplit
      let m' = cacheWrite m2 i n (r1 + r2)
      (r1 + r2, m')
  | otherwise = do
      let (r, m') = memoizedCountSplitStones m (pred i) (n * 2024)
      let m'' = cacheWrite m' i n r
      (r, m'')
  where
    secondSplit = n `mod` (10 ^ (digitCount `div` 2))
    firstSplit = (n - secondSplit) `div` (10 ^ (digitCount `div` 2))
    digitCount = succ . floor . logBase 10 . fromIntegral $ n
    maybeMemoized = cacheLookup m i n

foldMemoized :: Int -> (Int, BlinkCache) -> Int -> (Int, BlinkCache)
foldMemoized i (r, m) n = (r + r2, m')
  where
    (r2, m') = memoizedCountSplitStones m i n

cacheWrite :: BlinkCache -> Int -> Int -> Int -> BlinkCache
cacheWrite bc i n r = Map.adjust (Map.insert n r) i bc

cacheLookup :: BlinkCache -> Int -> Int -> Maybe Int
cacheLookup bc i n = do
  sc <- bc Map.!? i
  sc Map.!? n

emptyCache :: BlinkCache
emptyCache = Map.fromList [(i, Map.empty) | i <- [1 .. 75]]

part1 = foldl (foldMemoized 25) (0, emptyCache) >>> fst
part2 = foldl (foldMemoized 75) (0, emptyCache) >>> fst

main = getContents >>= print . (part1 &&& part2) . parse
```
Some nice monadic code patterns going on there, passing the cache around! (You might want to look into the State monad if you haven’t come across it before)
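For anyone curious, here's a rough, untested sketch of what that could look like with the cache threaded through the State monad instead of passed around by hand. It simplifies the nested cache above into a single map keyed by (blinks remaining, stone value); `countStones` and `solve` are made-up names for illustration, not from the solution above.
```haskell
import Control.Monad.State.Strict (State, evalState, gets, modify')
import Data.Map.Strict (Map)
import qualified Data.Map.Strict as Map

type Cache = Map (Int, Int) Int

-- How many stones a single stone becomes after i blinks,
-- memoized on (blinks remaining, stone value) via State.
countStones :: Int -> Int -> State Cache Int
countStones 0 _ = pure 1
countStones i n = do
  cached <- gets (Map.lookup (i, n))
  case cached of
    Just r -> pure r
    Nothing -> do
      r <- sum <$> mapM (countStones (i - 1)) (blink n)
      modify' (Map.insert (i, n) r)
      pure r
  where
    blink 0 = [1]
    blink m
      | even l = let (a, b) = splitAt (l `div` 2) s in map read [a, b]
      | otherwise = [m * 2024]
      where
        s = show m
        l = length s

solve :: Int -> [Int] -> Int
solve blinks stones = evalState (sum <$> mapM (countStones blinks) stones) Map.empty
```
The cache still flows through every call, but the plumbing is hidden behind `gets`/`modify'` rather than appearing in every return value.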
Nim
Runtime: 2 us
I’m not very experienced with recursion and memoization, so this took me quite a while.
```nim
template split(n: int): seq[int] =
  let ns = $n
  @[parseInt(ns[0..<ns.len div 2]), parseInt(ns[ns.len div 2..^1])]

template applyRule(stone: int): seq[int] =
  if stone == 0: @[1]
  elif ($stone).len mod 2 == 0: split(stone)
  else: @[stone * 2024]

proc stoneCount(stone: int, targetDepth: int): int =
  var memo {.global.}: Table[(int, int), int]
  if (stone, targetDepth) in memo:
    return memo[(stone, targetDepth)]

  var depth = 0
  proc rule(st: int): seq[int] =
    var memo {.global.}: Table[int, seq[int]]
    if st in memo: return memo[st]
    result = stone.applyRule
    memo[st] = result

  if depth == targetDepth: return 1
  for st in rule(stone):
    result += st.stoneCount(targetDepth - 1)
  memo[(stone, targetDepth)] = result

proc solve(input: string): AOCSolution[int, int] =
  for stone in input.split.map(parseInt):
    result.part1 += stone.stoneCount(25)
    result.part2 += stone.stoneCount(75)
```
C#
```csharp
public class Day11 : Solver
{
    private long[] data;

    private class TreeNode(TreeNode? left, TreeNode? right, long value)
    {
        public TreeNode? Left = left;
        public TreeNode? Right = right;
        public long Value = value;
    }

    private Dictionary<(long, int), long> generation_length_cache = [];
    private Dictionary<long, TreeNode> subtree_pointers = [];

    public void Presolve(string input)
    {
        data = input.Trim().Split(" ").Select(long.Parse).ToArray();
        List<TreeNode> roots = data.Select(value => new TreeNode(null, null, value)).ToList();
        List<TreeNode> last_level = roots;
        subtree_pointers = roots.GroupBy(root => root.Value)
            .ToDictionary(grouping => grouping.Key, grouping => grouping.First());
        for (int i = 0; i < 75; i++)
        {
            List<TreeNode> next_level = [];
            foreach (var node in last_level)
            {
                long[] children = Transform(node.Value).ToArray();
                node.Left = new TreeNode(null, null, children[0]);
                if (subtree_pointers.TryAdd(node.Left.Value, node.Left))
                {
                    next_level.Add(node.Left);
                }
                if (children.Length <= 1) continue;
                node.Right = new TreeNode(null, null, children[1]);
                if (subtree_pointers.TryAdd(node.Right.Value, node.Right))
                {
                    next_level.Add(node.Right);
                }
            }
            last_level = next_level;
        }
    }

    public string SolveFirst() => data.Select(value => GetGenerationLength(value, 25)).Sum().ToString();

    public string SolveSecond() => data.Select(value => GetGenerationLength(value, 75)).Sum().ToString();

    private long GetGenerationLength(long value, int generation)
    {
        if (generation == 0) { return 1; }
        if (generation_length_cache.TryGetValue((value, generation), out var result)) return result;
        TreeNode cur = subtree_pointers[value];
        long sum = GetGenerationLength(cur.Left.Value, generation - 1);
        if (cur.Right is not null)
        {
            sum += GetGenerationLength(cur.Right.Value, generation - 1);
        }
        generation_length_cache[(value, generation)] = sum;
        return sum;
    }

    private IEnumerable<long> Transform(long arg)
    {
        if (arg == 0) return [1];
        if (arg.ToString() is { Length: var l } str && (l % 2) == 0)
        {
            return [int.Parse(str[..(l / 2)]), int.Parse(str[(l / 2)..])];
        }
        return [arg * 2024];
    }
}
```
Haskell
Yay, mutation! Went down the route of caching the expanded lists of stones at first. Oops.
```haskell
import Data.IORef
import Data.Map.Strict (Map)
import Data.Map.Strict qualified as Map

blink :: Int -> [Int]
blink 0 = [1]
blink n
  | s <- show n, l <- length s, even l =
      let (a, b) = splitAt (l `div` 2) s in map read [a, b]
  | otherwise = [n * 2024]

countExpanded :: IORef (Map (Int, Int) Int) -> Int -> [Int] -> IO Int
countExpanded _ 0 = return . length
countExpanded cacheRef steps = fmap sum . mapM go
  where
    go n =
      let key = (n, steps)
          computed = do
            result <- countExpanded cacheRef (steps - 1) $ blink n
            modifyIORef' cacheRef (Map.insert key result)
            return result
       in readIORef cacheRef >>= maybe computed return . (Map.!? key)

main = do
  input <- map read . words <$> readFile "input11"
  cache <- newIORef Map.empty
  mapM_ (\steps -> countExpanded cache steps input >>= print) [25, 75]
```
Does the IORef propagate back up the recursion tree? If you modify the IORef at some depth of 15, does the calling function also receive the update? And is there also a non-IO ref?
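As far as I know: there is only one IORef here, so a `modifyIORef'` performed deep in the recursion is visible to the caller on its next `readIORef`; the reference itself is shared, not copied per call. A non-IO alternative is `STRef` from `Data.STRef`, which works the same way but lives inside `runST`, so the overall function stays pure. A rough, untested sketch of the same caching pattern (`countST` and `countExpandedPure` are made-up names for illustration):
```haskell
import Control.Monad.ST (ST, runST)
import Data.Map.Strict (Map)
import qualified Data.Map.Strict as Map
import Data.STRef (STRef, modifySTRef', newSTRef, readSTRef)

-- Same blink rule as above: one stone becomes one or two stones.
blink :: Int -> [Int]
blink 0 = [1]
blink n
  | even l = let (a, b) = splitAt (l `div` 2) s in map read [a, b]
  | otherwise = [n * 2024]
  where
    s = show n
    l = length s

-- One mutable cache shared by every recursive call.
countST :: STRef s (Map (Int, Int) Int) -> Int -> Int -> ST s Int
countST _ 0 _ = pure 1
countST ref steps n = do
  cache <- readSTRef ref
  case Map.lookup (n, steps) cache of
    Just r -> pure r
    Nothing -> do
      r <- sum <$> mapM (countST ref (steps - 1)) (blink n)
      modifySTRef' ref (Map.insert (n, steps) r) -- visible to all later calls
      pure r

-- runST keeps the mutation local; from the outside this is a pure function.
countExpandedPure :: Int -> [Int] -> Int
countExpandedPure steps stones = runST $ do
  ref <- newSTRef Map.empty
  sum <$> mapM (countST ref steps) stones
```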