I created a solution…with a Pi 4, but it just doesn’t seem to work very well. OCR is very finicky: while I was able to get pytesseract to read the images pulled off a webcam, the numbers that come back are very wrong. It also looks like they only allow businesses to pull the power meter data, if I am reading this right: https://www.pge.com/en/save-energy-and-money/energy-saving-programs/smartmeter.html

My rate has increased 6 times this year, so power is very expensive here: 50¢ per kWh…on the lowest consumption tier. I need to figure out how to cut back or get solar panels. But I want to see in near real time how much energy we are using.

  • litchralee@sh.itjust.works
    3 days ago

    I suspect that PG&E’s smart meters might: 1) support an infrared pulse through an LED on the top of the meter, and 2) use a fairly-open protocol for uploading their meter data to the utility, which can be picked up using a Software Defined Radio (SDR).

    Open Energy Monitor has a write-up about using the pulse output, where each pulse means a fixed quantity of energy was delivered (e.g. 1 Watt-hour). Counting 1000 such pulses would then be 1 kWh, and that gives you a way to track your energy consumption over any timescale.

    What it won’t do is provide instantaneous power (i.e. kW drawn at this very moment), because the energy has to accumulate to the pulse threshold before the next pulse fires. For example, a 9 Watt LED bulb that is powered on would only cause a new pulse every 6.7 minutes. But for larger loads the indication is very quick; a 5000 W dryer would emit a new pulse after no more than 0.72 seconds.
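
    For reference, here’s roughly what the pulse counting could look like on the Pi. This is only a sketch: the GPIO pin, the photodiode/phototransistor wiring, and the 1 Wh-per-pulse constant are all assumptions (the actual pulse constant is usually printed on the meter face as Kh), so check them against your meter.

        import time

        import RPi.GPIO as GPIO

        PULSE_PIN = 17        # hypothetical BCM pin the light sensor is wired to
        WH_PER_PULSE = 1.0    # assumed energy per pulse; check your meter's Kh constant

        last_pulse = None
        total_wh = 0.0

        def on_pulse(channel):
            global last_pulse, total_wh
            now = time.monotonic()
            total_wh += WH_PER_PULSE
            if last_pulse is not None:
                # average power over the interval between the last two pulses
                watts = WH_PER_PULSE * 3600.0 / (now - last_pulse)
                print(f"~{watts:.0f} W, {total_wh / 1000:.3f} kWh total")
            last_pulse = now

        GPIO.setmode(GPIO.BCM)
        GPIO.setup(PULSE_PIN, GPIO.IN, pull_up_down=GPIO.PUD_DOWN)
        GPIO.add_event_detect(PULSE_PIN, GPIO.RISING, callback=on_pulse, bouncetime=5)

        try:
            while True:
                time.sleep(1)
        finally:
            GPIO.cleanup()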

    The other option is decoding the wireless protocol, which people have done using FOSS software. An RTL-SDR receiver is not very expensive, is very popular, and can be used for plenty of other purposes besides monitoring the electric meter. Insofar as US law is concerned, unencrypted transmissions are fair game to receive and decode. This method also exposes a wealth of other useful info in the data stream, such as instantaneous wattage in addition to the counter registers.
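
    If you go the SDR route, rtlamr is the usual FOSS decoder for these meter broadcasts. Below is a very rough sketch of feeding its output into Python; the flag names and JSON field names are from memory of rtlamr’s docs, and the meter ID is a placeholder, so double-check everything before relying on it.

        import json
        import subprocess

        METER_ID = "12345678"  # placeholder; your meter's ID is printed on its face

        cmd = [
            "rtlamr",
            "-format=json",           # assumed flag: emit one JSON object per message
            "-filterid=" + METER_ID,  # assumed flag: only report this meter
        ]

        with subprocess.Popen(cmd, stdout=subprocess.PIPE, text=True) as proc:
            for line in proc.stdout:
                try:
                    msg = json.loads(line)
                except json.JSONDecodeError:
                    continue
                # For SCM messages the running register is usually under
                # Message.Consumption; the field name may differ by message type.
                consumption = msg.get("Message", {}).get("Consumption")
                if consumption is not None:
                    print(f"meter register: {consumption}")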

  • e0qdk@reddthat.com
    3 days ago

    Don’t know about PG&E’s API, but for the OCR stuff you may get better results with additional preprocessing before you pass images into tesseract: crop to just the region of interest, and try various image processing techniques to make the text pop out better if needed. You can also run tesseract telling it specifically that you’re looking at a single line of text, which can give better results (e.g. --psm 7 for the command line tool). OCR is indeed finicky…
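
    Rough sketch of that kind of preprocessing with OpenCV + pytesseract; the crop coordinates and threshold settings are placeholders and depend entirely on your camera position and lighting.

        import cv2
        import pytesseract

        frame = cv2.imread("meter.jpg")  # or a frame grabbed from the webcam
        assert frame is not None, "could not load image"

        # 1. Crop to just the digit window (x, y, w, h are made-up values)
        x, y, w, h = 100, 200, 300, 80
        roi = frame[y:y + h, x:x + w]

        # 2. Make the digits pop: grayscale, upscale, blur, then Otsu threshold
        gray = cv2.cvtColor(roi, cv2.COLOR_BGR2GRAY)
        gray = cv2.resize(gray, None, fx=2, fy=2, interpolation=cv2.INTER_CUBIC)
        gray = cv2.GaussianBlur(gray, (3, 3), 0)
        _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

        # 3. Tell tesseract it is a single line of digits
        text = pytesseract.image_to_string(
            binary,
            config="--psm 7 -c tessedit_char_whitelist=0123456789",
        )
        print(text.strip())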

    • mesamune@lemmy.worldOP
      3 days ago

      I’ll try it out. I tried --psm 11 at one point, but got 000000 back unless my camera was in just the perfect spot, and even then I would only get around half of the numbers right.

  • mesamune@lemmy.worldOP
    3 days ago
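
    # Read the meter display from a webcam with EasyOCR and smooth the result
    # by taking the most common reading over the last several frames.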
    import cv2
    import easyocr
    import numpy as np
    from PIL import Image
    from collections import Counter
    
    # Initialize the EasyOCR reader
    reader = easyocr.Reader(['en'])
    
    def preprocess_image(image):
        # Convert to PIL image for EasyOCR processing
        return Image.fromarray(cv2.cvtColor(image, cv2.COLOR_BGR2RGB))
    
    def recognize_text(image):
        processed_image = preprocess_image(image)
        results = reader.readtext(np.array(processed_image), allowlist='0123456789')
        # Concatenate all recognized text results
        recognized_text = ''.join(result[1] for result in results)
        return recognized_text
    
    def format_number(text, length=6):
        # Remove non-numeric characters and pad with zeros if necessary
        formatted = ''.join(filter(str.isdigit, text))
        return formatted.zfill(length)[-length:]
    
    def most_common_number(numbers):
        # Find the most common number from the list of numbers
        counter = Counter(numbers)
        most_common = counter.most_common(1)
        return most_common[0][0] if most_common else ''
    
    def main():
        cap = cv2.VideoCapture(2)  # webcam device index; adjust for your setup
    
        if not cap.isOpened():
            print("Error: Could not open webcam.")
            return
    
        print("Press 'q' to quit.")
        text_history = []
    
        while True:
            ret, frame = cap.read()
            if not ret:
                print("Error: Failed to capture image.")
                break
    
            # Recognize text from the current frame
            recognized_text = recognize_text(frame)
            formatted_number = format_number(recognized_text)
    
            # Update the history with the latest recognized number
            text_history.append(formatted_number)
    
            # Keep only the last 20 recognized numbers
            if len(text_history) > 20:
                text_history.pop(0)
    
            # Determine the most common number from the history
            most_common = most_common_number(text_history)
            print(f"Most common number from last 10 frames: {most_common}")
    
            # Show the frame so pressing 'q' in the preview window actually quits
            cv2.imshow("meter", frame)
            if cv2.waitKey(1) & 0xFF == ord('q'):
                break
    
        cap.release()
        cv2.destroyAllWindows()
    
    if __name__ == "__main__":
        main()
    
    
    • mesamune@lemmy.worldOP
      3 days ago

      The easyocr version somewhat works…but I’m still having issues when it’s actually outside. I’ll keep hacking away at it, but thought I would ask if anyone has already figured this out, or whether the API is viable.