PV forecast: purely with weather and historical values


Most PV forecasts are based on standard weather models. However, these models do not know whether a tree is shading your panels or how efficient your system really is in diffuse light. The solution: a historical comparison. We look at how much electricity your system delivered in the past under exactly the same cloud cover. Panel angle and the number of separate PV surfaces do not matter, because only the historical PV yields and cloud cover levels of past days are used, compared against the cloud cover forecast for today and the coming days. I have created a HACS extension for a quick setup. Alternatively, the forecast also works with Home Assistant on-board means alone: just the standard weather forecast and an SQL sensor with a value template. Optionally, a markdown card can be created to display the cloud cover levels and make the calculation easier to follow.

The problem with classic PV forecasts

External PV forecasting services (Forecast.Solar, Solcast & Co.) estimate the yield on the basis of weather data and configured system output. This works for rough planning, but is often too inaccurate for daily operation:

  • The actual shading caused by trees, buildings or snow is not taken into account
  • The effective output of the system (ageing, soiling) is unknown
  • Local weather phenomena such as ground fog or rapidly changing cloud cover are incorrectly assessed

Better: The knowledge about the yield and cloud cover is already in the Home Assistant database - you just have to read it out.

Beta: HACS integration

PV History Forecast is a custom integration for Home Assistant (installable via HACS), which accesses the SQLite database of Home Assistant directly. The SQL query used searches for historical days with similar cloud cover for the whole day and the rest of the day.
Installation: via HACS as a custom repository (type: Integration)
These comparison days are then:
  • Seasonally scaled - a summer day is normalized to the current autumn day, taking into account day length and position of the sun (astronomically correct formula by latitude)
  • Weighted - the more a comparison day resembles today's cloud cover, the more it is included in the forecast
  • Averaged or interpolated - depending on the available data (weighted average, light reduction or max assumption)
The result: A residual forecast for today based on real data from your own system.
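The weighting idea can be sketched in a few lines of Python. The values below are hypothetical; the weight formula `1 / max(|Δclouds|, 0.5)` is the one used by the integration's weighted-average mode:

```python
# Sketch of the comparison-day weighting (made-up example values).
# Each historical day gets a weight that grows as its cloud cover
# approaches today's forecast: w = 1 / max(|clouds - forecast|, 0.5)
forecast_clouds = 40.0  # today's forecasted cloud cover in %

history = [
    {"day": "2025-10-01", "clouds": 42.0, "yield_kwh": 8.4},
    {"day": "2025-10-05", "clouds": 70.0, "yield_kwh": 4.1},
    {"day": "2025-10-09", "clouds": 38.0, "yield_kwh": 9.0},
]

for d in history:
    d["w"] = 1.0 / max(abs(d["clouds"] - forecast_clouds), 0.5)

total_w = sum(d["w"] for d in history)
forecast_kwh = sum(d["yield_kwh"] * d["w"] for d in history) / total_w
print(round(forecast_kwh, 2))
```

The two near-matches (42% and 38% clouds) dominate the result, while the 70% day barely contributes.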

Generated sensors

After installation and configuration (default prefix: `pv_hist`), the following sensors are available:

| Sensor | Meaning |
| :--- | :--- |
| `sensor.pv_hist_remaining_today` | Expected remaining yield today in kWh (main sensor) |
| `sensor.pv_hist_remaining_min` | Pessimistic daily remainder (cloudier similar days) |
| `sensor.pv_hist_remaining_max` | Optimistic daily remainder (brighter similar days) |
| `sensor.pv_hist_tomorrow` | Weighted forecast for the total yield tomorrow in kWh |
| `sensor.pv_hist_weather_forecast` | Internal auxiliary sensor: hourly weather forecast as JSON |
| `sensor.pv_hist_cloud_coverage` | Automatic cloud sensor (if no external sensor is selected) |

The main sensor `sensor.pv_hist_remaining_today` also contains the attribute `lovelace_card` - a fully rendered Markdown card that can be integrated directly into the dashboard.

Simplified flow:

  • cloud_history: Reads historical cloud cover values from the HA statistics (LTS)
  • matching_days: Searches for days on which the cloud cover average in the rest of the day is similar to the current one
  • final_data: Calculates the scaled yield for each comparison day and returns the result as JSON

The scaling between a comparison day and today is carried out using an astronomically correct day-length formula:

dl = 24/π · arccos(−tan(φ) · tan(δ))
δ  = −0.4093 · cos(2π · (day + 10) / 365)
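For a quick sanity check, the two formulas translate directly into Python. The latitude of 48° N is just an example value; the clamp to [-1, 1] mirrors the templates further down, which guard against polar day/night:

```python
import math

def declination(day_of_year: int) -> float:
    """Solar declination in radians (same approximation as in the article)."""
    return -0.4093 * math.cos(2 * math.pi * (day_of_year + 10) / 365)

def day_length(latitude_deg: float, day_of_year: int) -> float:
    """Day length in hours: dl = 24/pi * arccos(-tan(phi) * tan(delta))."""
    phi = math.radians(latitude_deg)
    cos_ha = -math.tan(phi) * math.tan(declination(day_of_year))
    cos_ha = max(-1.0, min(1.0, cos_ha))  # clamp for polar day / polar night
    return 24 / math.pi * math.acos(cos_ha)

print(round(day_length(48.0, 300), 2))  # late October at 48° N
```

At the equator the formula yields 12 hours year-round; at 48° N it ranges from roughly 8 hours in December to almost 16 hours in June, which is exactly the ratio used to scale a summer comparison day down to an autumn day.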

There is also a seasonal snow detection (December-February): If yesterday's yield was conspicuously low given the available solar potential, a snow factor is applied.
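The snow heuristic from the value template (shown later in this article) boils down to a simple performance ratio; this sketch uses assumed example values:

```python
def snow_factor(month: int, yesterday_yield_kwh: float, yesterday_clouds_pct: float) -> float:
    """Seasonal snow heuristic as used in the value template:
    if yesterday's yield was far below the available light potential
    (Dec-Feb only), assume snow-covered panels and scale by 0.1."""
    if month not in (12, 1, 2):
        return 1.0
    potential = max(105 - yesterday_clouds_pct, 5)  # rough light potential
    performance = yesterday_yield_kwh / potential
    return 0.1 if performance < 0.02 else 1.0

print(snow_factor(1, 0.05, 40.0))  # near-zero yield despite light -> snow suspected
print(snow_factor(1, 5.0, 40.0))   # normal winter day
```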

Installation

The integration is installed via HACS as a custom repository: Open HACS → ⋮ → Add custom repository

URL:

LiBe-net/ha_pv_history_forecast
  • In HACS: Search for and install "PV History Forecast"
  • Restart Home Assistant
  • Settings → Devices & Services → Integrations → Add "PV History Forecast"

Configuration

Configuration is carried out completely via the HA user interface in two steps.

Step 1 - Prefix & database

| Field | Default | Description |
| :--- | :--- | :--- |
| Sensor prefix | `pv_hist` | Basis for all sensor names |
| Database path | (empty) | Leave empty = standard HA database |

Step 2 - Sensors

| Field | Required | Description |
| :--- | :--- | :--- |
| Weather entity | yes | `weather.*` entity, e.g. `weather.forecast_home` |
| PV energy sensor | yes | Sensor with `device_class: energy` and active statistics; Wh is automatically converted to kWh |
| Cloud sensor | optional | Sensor with unit `%`; empty = auto sensor is created |
| Comparison days | default: 30 | Number of historical days for comparison |

Tip: The dropdowns in the configuration step only show suitable sensors - PV sensors are filtered by `device_class: energy` and active statistics, cloud sensors by unit `%`.

The Lovelace Dashboard

The finished markdown card is available as an attribute of the main sensor and shows:

  • The calculated residual forecast in kWh as a headline
  • The current cloud cover average and the calculation method used
  • A table of historical comparison days with day cloud cover, day yield, residual cloud cover, residual yield and percentage influence weight

Integration into a Lovelace Markdown Card:

type: markdown
content: "{{ state_attr('sensor.pv_hist_remaining_today', 'lovelace_card') }}"

Requirements

  • Home Assistant 2024.1.0 or newer
  • HACS installed and set up
  • PV sensor: `state_class: total_increasing`, unit `kWh` or `Wh`, active statistics
  • Cloud cover data: weather entity with `cloud_coverage` in the forecast response (or external cloud sensor)

Without HACS: purely with the SQL integration

Before I created the HACS integration on GitHub, I implemented the forecast purely with the SQL integration. Here is how to rebuild it:

Requirement: sensor for the weather forecast

The default entity name is usually: weather.forecast_home

Save forecast data

Customization in configuration.yaml: save the hourly weather forecast in a sensor (the template below creates sensor.weather_forecast_hourly):

template:
  - trigger:
      - platform: time_pattern
        minutes: /15
    action:
      - service: weather.get_forecasts
        data:
          type: hourly
        target:
          entity_id: weather.forecast_home
        response_variable: hourly
    sensor:
      - name: weather_forecast_hourly   # becomes sensor.weather_forecast_hourly
        unique_id: weather_forecast_hourly
        state: "{{ now().isoformat() }}"
        attributes:
          forecast:  "{{ hourly['weather.forecast_home'].forecast }}"

Optional: Use more than 10 days for the calculation.

For more than 10 days we need an auxiliary sensor so that the cloud cover values are written to the long-term statistics.

Sensor name: weather_cloud_coverage (entity: sensor.weather_cloud_coverage)

{{ state_attr('weather.forecast_home', 'cloud_coverage') | float }}
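A complete helper definition could look like this (a sketch for configuration.yaml; `state_class: measurement` plus a `%` unit is what makes Home Assistant record the values into long-term statistics, and `weather.forecast_home` is assumed as your weather entity):

```yaml
template:
  - sensor:
      - name: weather_cloud_coverage        # becomes sensor.weather_cloud_coverage
        unique_id: weather_cloud_coverage
        unit_of_measurement: "%"
        state_class: measurement            # required for long-term statistics
        state: >
          {{ state_attr('weather.forecast_home', 'cloud_coverage') | float(0) }}
```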

SQL sensor with value template

To create the SQL sensor, we only need 3 sensors for the source data:

  • weather.forecast_home contains the historical cloud cover levels (default entity after setting up Home Assistant)
  • sensor.weather_forecast_hourly contains the forecast (prerequisite: set up in configuration.yaml as described above)
  • sensor.pv_panels_energy: total counter for the PV yield

The names can be adapted accordingly at the start of the SQL sensor:

WITH vars AS (
    SELECT 
        'weather.forecast_home' as sensor_clouds,   -- weather entity directly: cloud_coverage from state_attributes
        'sensor.pv_panels_energy' as sensor_pv,
        'sensor.weather_forecast_hourly' as sensor_forecast,
        -- Computes the offset between local time and UTC (e.g. '+3600 seconds').
        -- Used to trigger the date rollover (00:00) in local time.
        (strftime('%s', 'now', 'localtime') - strftime('%s', 'now')) || ' seconds' as offset
),

ids AS (
    /* Fetches all required internal IDs for statistics and states from the HA database */
    SELECT 
        (SELECT id FROM statistics_meta WHERE statistic_id = (SELECT sensor_clouds FROM vars)) as w_id_stats,
        (SELECT metadata_id FROM states_meta WHERE entity_id = (SELECT sensor_clouds FROM vars)) as w_id_states,
        (SELECT id FROM statistics_meta WHERE statistic_id = (SELECT sensor_pv FROM vars) LIMIT 1) as p_id,
        (SELECT metadata_id FROM states_meta WHERE entity_id = (SELECT sensor_pv FROM vars) LIMIT 1) as p_id_states,
        (SELECT metadata_id FROM states_meta WHERE entity_id = (SELECT sensor_forecast FROM vars) LIMIT 1) as f_id
),

pv_activity AS (
    /* Determines the sunrise and sunset times based on yesterday's PV production */
    SELECT 
        COALESCE((
            SELECT strftime('%H:%M', last_updated_ts, 'unixepoch') 
            FROM states 
            WHERE metadata_id = (SELECT p_id_states FROM ids) 
              AND date(last_updated_ts, 'unixepoch', (SELECT offset FROM vars)) = date('now', (SELECT offset FROM vars), '-1 day') 
              AND state NOT IN ('unknown','0','0.0','unavailable') 
            ORDER BY last_updated_ts ASC LIMIT 1
        ), '05:30') as sun_start,
        COALESCE((
            -- Last point in time at which the cumulative sensor was still BELOW its daily maximum
            -- = last active production moment (sunset).
            -- state DESC LIMIT 1 would be wrong: the cumulative sensor stays at its max value until midnight.
            SELECT strftime('%H:%M', last_updated_ts, 'unixepoch') 
            FROM states 
            WHERE metadata_id = (SELECT p_id_states FROM ids) 
              AND date(last_updated_ts, 'unixepoch', (SELECT offset FROM vars)) = date('now', (SELECT offset FROM vars), '-1 day') 
              AND state NOT IN ('unknown', 'unavailable', '')
              AND CAST(state AS FLOAT) < (
                  SELECT MAX(CAST(state AS FLOAT))
                  FROM states
                  WHERE metadata_id = (SELECT p_id_states FROM ids)
                    AND date(last_updated_ts, 'unixepoch', (SELECT offset FROM vars)) = date('now', (SELECT offset FROM vars), '-1 day')
                    AND state NOT IN ('unknown', 'unavailable', '')
              )
            ORDER BY last_updated_ts DESC LIMIT 1
        ), '17:30') as sun_end
    FROM ids
),

forecast_val AS (
    /* Computes the average cloud cover for the remaining part of the current day */
    SELECT COALESCE(
        (SELECT AVG(CAST(json_extract(f.value, '$.cloud_coverage') AS FLOAT)) 
         FROM states s 
         JOIN state_attributes a ON s.attributes_id = a.attributes_id, 
         json_each(a.shared_attrs, '$.forecast') f 
         WHERE s.metadata_id = (SELECT f_id FROM ids) 
           AND s.last_updated_ts = (SELECT MAX(last_updated_ts) FROM states WHERE metadata_id = (SELECT f_id FROM ids)) 
           -- Match the forecast date against the local "today" (via the offset)
           AND substr(json_extract(f.value, '$.datetime'), 1, 10) = date('now', (SELECT offset FROM vars))
           AND substr(json_extract(f.value, '$.datetime'), 12, 5) 
               BETWEEN CASE 
                         WHEN strftime('%H:%M', 'now') > (SELECT sun_start FROM pv_activity) THEN strftime('%H:%M', 'now') 
                         ELSE (SELECT sun_start FROM pv_activity) 
                       END
               AND (SELECT sun_end FROM pv_activity)
        ), 50.0) as f_avg
),

forecast_next_day AS (
    /* Computes the average cloud cover for the entire next day */
    SELECT COALESCE((
        SELECT AVG(CAST(json_extract(f.value, '$.cloud_coverage') AS FLOAT)) 
        FROM states s 
        JOIN state_attributes a ON s.attributes_id = a.attributes_id, 
        json_each(a.shared_attrs, '$.forecast') f 
        WHERE s.metadata_id = (SELECT f_id FROM ids) 
          AND s.last_updated_ts = (SELECT MAX(last_updated_ts) FROM states WHERE metadata_id = (SELECT f_id FROM ids)) 
          AND substr(json_extract(f.value, '$.datetime'), 1, 10) = date('now', (SELECT offset FROM vars), '+1 day') 
          AND substr(json_extract(f.value, '$.datetime'), 12, 5) BETWEEN (SELECT sun_start FROM pv_activity) AND (SELECT sun_end FROM pv_activity)
    ), 50.0) as f_avg_morgen
),

cloud_history AS (
    /* Combines long-term statistics and short-term cloud-cover states for the historical comparison */
    SELECT start_ts as ts, CAST(COALESCE(mean, state) AS FLOAT) as val 
    FROM statistics 
    WHERE metadata_id = (SELECT w_id_stats FROM ids) 
      AND start_ts > strftime('%s', 'now', '-60 days')
    UNION ALL
    SELECT s.last_updated_ts as ts, 
      CASE WHEN (SELECT sensor_clouds FROM vars) LIKE 'weather.%' 
           THEN CAST(json_extract(a.shared_attrs, '$.cloud_coverage') AS FLOAT) 
           ELSE CAST(s.state AS FLOAT) 
      END as val 
    FROM states s 
    LEFT JOIN state_attributes a ON s.attributes_id = a.attributes_id 
    WHERE s.metadata_id = (SELECT w_id_states FROM ids) 
      AND ((SELECT sensor_clouds FROM vars) LIKE 'weather.%' OR NOT EXISTS (SELECT 1 FROM statistics WHERE metadata_id = (SELECT w_id_stats FROM ids)))
      AND s.last_updated_ts > strftime('%s', 'now', '-10 days') 
      AND s.state NOT IN ('unknown', 'unavailable', '')
),

matching_days AS (
    /* Finds past days whose cloud cover profile is closest to today's forecast */
    SELECT 
        date(ts, 'unixepoch') as day, 
        AVG(CASE WHEN strftime('%H:%M', ts, 'unixepoch') BETWEEN (SELECT sun_start FROM pv_activity) AND (SELECT sun_end FROM pv_activity) THEN val END) as h_avg_total_val,
        AVG(CASE WHEN strftime('%H:%M', ts, 'unixepoch') >= strftime('%H:00', 'now') AND strftime('%H:%M', ts, 'unixepoch') <= (SELECT sun_end FROM pv_activity) THEN val END) as h_avg_rest_val
    FROM cloud_history 
    -- Filters the history: everything before the current local day (offset-driven)
    WHERE date(ts, 'unixepoch') < date('now', (SELECT offset FROM vars)) 
    GROUP BY 1 
    HAVING h_avg_total_val IS NOT NULL AND h_avg_total_val > 0
    ORDER BY ABS(
    COALESCE(h_avg_rest_val, h_avg_total_val) -- fall back to the whole-day average when the rest average is NULL
    - (SELECT f_avg FROM forecast_val)
) ASC
),

final_data AS (
    /* Determines the real PV yields of the best-matching historical days */
    SELECT 
        md.*,
        (SELECT MAX(state) FROM statistics WHERE metadata_id = (SELECT p_id FROM ids) AND date(start_ts, 'unixepoch') = md.day) as day_max,
        (SELECT MIN(state) FROM statistics WHERE metadata_id = (SELECT p_id FROM ids) AND date(start_ts, 'unixepoch') = md.day AND state > 0) as day_min,
        COALESCE((SELECT state FROM statistics WHERE metadata_id = (SELECT p_id FROM ids) AND date(start_ts, 'unixepoch') = md.day AND strftime('%H', start_ts, 'unixepoch') = strftime('%H', 'now') LIMIT 1), (SELECT MIN(state) FROM statistics WHERE metadata_id = (SELECT p_id FROM ids) AND date(start_ts, 'unixepoch') = md.day AND state > 0)) as h_hour_curr,
        COALESCE((SELECT state FROM statistics WHERE metadata_id = (SELECT p_id FROM ids) AND date(start_ts, 'unixepoch') = md.day AND strftime('%H', start_ts, 'unixepoch') = strftime('%H', 'now', '-1 hour') LIMIT 1), (SELECT MIN(state) FROM statistics WHERE metadata_id = (SELECT p_id FROM ids) AND date(start_ts, 'unixepoch') = md.day AND state > 0)) as h_hour_prev
    FROM matching_days md
)

/* Generates the final JSON object for Home Assistant */
SELECT json_group_array(
    json_object(
        'datum', day,
        'f_avg_heute_rest', (SELECT ROUND(f_avg, 1) FROM forecast_val),        
        'f_avg_morgen', (SELECT ROUND(f_avg_morgen, 1) FROM forecast_next_day),
        'h_avg_gesamt', ROUND(h_avg_total_val, 1),
        'h_avg_rest', ROUND(h_avg_rest_val, 1),
        'ertrag_tag_gesamt', ROUND(day_max - day_min, 2),
        -- Return 0 whenever the current UTC time is outside the PV-active window
        -- (sun_start and sun_end are stored in UTC, derived from HA state timestamps).
        -- This also covers the hour after local midnight (22:00-24:00 UTC under CET/CEST),
        -- where a naive string comparison like '23:30' > '17:30' would wrongly report daytime.
        'ertrag_tag_rest', ROUND(CASE 
            WHEN NOT (strftime('%H:%M', 'now') BETWEEN (SELECT sun_start FROM pv_activity) AND (SELECT sun_end FROM pv_activity))
                THEN 0.0
            ELSE MAX(0, 
                ((h_hour_curr - h_hour_prev) * (1.0 - (CAST(strftime('%M', 'now') AS FLOAT) / 60.0)) * 
                  CASE 
                    WHEN strftime('%H', 'now') = strftime('%H', (SELECT sun_start FROM pv_activity)) THEN 0.85
                    WHEN strftime('%H', 'now') = strftime('%H', (SELECT sun_end FROM pv_activity)) THEN 0.70
                    ELSE 1.0 
                  END)
                + (day_max - h_hour_curr)
            )
        END, 2),
        'pv_start', (SELECT sun_start FROM pv_activity),
        'pv_ende', (SELECT sun_end FROM pv_activity)
    )
) as json FROM final_data WHERE day_max > 0;

If you have created your own template sensor for the cloud cover as described above, it must first collect data for a few days. The variable 'weather.forecast_home' as sensor_clouds can then be replaced with the template sensor:

...
 'sensor.weather_cloud_coverage' as sensor_clouds,
...

This means that more than 10 days can be used to determine the PV forecast.

The SQL query in Home Assistant has a single task: it searches the last 60 days of your database for days that are weather-wise "twins" of today.
Sun window (pv_activity): the query looks at when your system delivered electricity for the first and last time yesterday. Cloud cover is only compared within this window (e.g. 06:30 to 18:30).
Cloud cover comparison (matching): the query takes today's weather forecast (e.g. 40% clouds) and sorts all past days by how close they were to this 40%. The days with the smallest deviation end up at the top of the list (JSON).
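The core of `matching_days` is a sort by closeness: the SQL orders candidate days by `ABS(COALESCE(h_avg_rest, h_avg_total) - forecast)`. A minimal Python sketch with made-up values:

```python
forecast_rest = 40.0  # today's remaining-day cloud forecast in %

days = [
    {"day": "2025-10-01", "h_avg_total": 55.0, "h_avg_rest": 60.0},
    {"day": "2025-10-02", "h_avg_total": 35.0, "h_avg_rest": None},  # no rest average recorded
    {"day": "2025-10-03", "h_avg_total": 80.0, "h_avg_rest": 42.0},
]

# ORDER BY ABS(COALESCE(h_avg_rest, h_avg_total) - forecast) ASC
ranked = sorted(
    days,
    key=lambda d: abs(
        (d["h_avg_rest"] if d["h_avg_rest"] is not None else d["h_avg_total"])
        - forecast_rest
    ),
)
print([d["day"] for d in ranked])  # closest match first
```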

Value template

{# PV FORECAST LOGIC: computes the remaining yield based on historically similar days #}
{% set raw = value %}

{% if raw and raw != '[]' and raw is not none %}
  {% set data = raw | from_json %}
  

    {# --- 1. BASE DATA --- #}
    {% set f_avg = data[0].f_avg_heute_rest | float(default=50.0) %}
    {% set current_month = now().month %}
    {% set schnee_faktor_heute = 1.0 %}

    {# --- 2. SEASONAL SNOW DETECTION (Dec, Jan, Feb only) --- #}
    {% if current_month in [12, 1, 2] %}
      {% set gestern_datum = (now() - timedelta(days=1)).strftime('%Y-%m-%d') %}
      {% set gestern_data = data | selectattr('datum', 'equalto', gestern_datum) | list | first %}

      {% if gestern_data is defined %}
        {% set y_rest_gestern = gestern_data.ertrag_tag_rest | float(default=0) %}
        {% set h_rest_gestern = gestern_data.h_avg_rest | float(default=0) %}
        {% set perf_gestern = y_rest_gestern / ([105 - h_rest_gestern, 5] | max) %}
        {% if perf_gestern < 0.02 %}
          {% set schnee_faktor_heute = 0.1 %}
        {% endif %}
      {% endif %}
    {% endif %}

    {# --- 3. ASTRONOMICAL BASE DATA (site-accurate via the latitude of the HA location) --- #}
    {# latitude is passed to the template as a variable by the sensor (hass.config.latitude) #}
    {% set day_of_year = now().strftime('%j') | int(default=1) %}
    {% set lat_rad = latitude * pi / 180 %}
    {% set decl = -0.4093 * cos(2 * pi * (day_of_year + 10) / 365) %}
    {% set cos_ha = -tan(lat_rad) * tan(decl) %}
    {% set dl_today = 24 / pi * acos([[cos_ha, -1.0] | max, 1.0] | min) %}
    {% set sun_today = 0.65 + 0.35 * cos((day_of_year - 172) * 2 * pi / 365) %}

    {# --- 4. PREPARE THE DATA POOL --- #}
    {% set ns_pool = namespace(items=[], total_w=0) %}
    {% for item in data %}
      {% set yield_raw = item.ertrag_tag_rest | float(default=0) %}
      {% set clouds = item.h_avg_rest | float(default=0) %}
      {% set dt_item = as_datetime(item.datum) %}
      
      {% if dt_item is not none %}
        {% set item_day = dt_item.strftime('%j') | int(default=1) %}
        {% set decl_i = -0.4093 * cos(2 * pi * (item_day + 10) / 365) %}
        {% set cos_ha_i = -tan(lat_rad) * tan(decl_i) %}
        {% set dl_item = 24 / pi * acos([[cos_ha_i, -1.0] | max, 1.0] | min) %}
        {% set sun_item = 0.65 + 0.35 * cos((item_day - 172) * 2 * pi / 365) %}
        {% set s_korr = (sun_today / sun_item) * (dl_today / dl_item) %}
        {% set diff = (clouds - f_avg) | abs %}
        {% set w = 1 / ([diff, 0.5] | max) %}

        {% if yield_raw > 0.05 or clouds > 95 or current_month in [12, 1, 2] %}
          {% set ns_pool.total_w = ns_pool.total_w + w %}
          {% set ns_pool.items = ns_pool.items + [{'h_avg': clouds, 'y_korr': yield_raw * s_korr, 'w': w}] %}
        {% endif %}
      {% endif %}
    {% endfor %}

    {# --- 5. FORECAST CALCULATION --- #}
    {% set pool = ns_pool.items %}
    {% set brighter = pool | selectattr('h_avg', 'lt', f_avg) | list %}
    {% set darker = pool | selectattr('h_avg', 'gt', f_avg) | list %}
    {% set res = 0 %}

    {% if brighter | count > 0 and darker | count == 0 %}
      {% set worst_day = brighter | sort(attribute='y_korr') | first %}
      {% set res = worst_day.y_korr * ([120 - f_avg, 5.0] | max / [120 - worst_day.h_avg, 5.0] | max) %}
    {% elif darker | count > 0 and pool | selectattr('h_avg', 'le', f_avg) | list | count == 0 %}
      {% set res = darker | map(attribute='y_korr') | max %}
    {% elif pool | count > 0 %}
      {% set ns_mix = namespace(ws=0) %}
      {% for item in pool %}
        {% set ns_mix.ws = ns_mix.ws + (item.y_korr * item.w) %}
      {% endfor %}
      {% set res = ns_mix.ws / (ns_pool.total_w if ns_pool.total_w > 0 else 1) %}
    {% endif %}

    {# --- 6. FINAL SCALING --- #}
    {% set final_val = (res / (1000 if res > 200 else 1)) * schnee_faktor_heute %}
    {{ final_val | round(2) }}

{% else %}
  0.0
{% endif %}

SQL only provides raw historical figures. The template processes these:
Seasonal sun correction: the sun is much higher in June than in February, so the angle of incidence and the day length are corrected.
Snow detection (winter logic): if it is December to February and your system delivered almost 0 W yesterday despite the available light, the template assumes there is snow on the panels. Today's forecast is then reduced by 90%.
Intelligent averaging:
Does the archive contain days that were both brighter and darker than today's forecast? Then a weighted average is calculated.
Is today's forecast cloudier than every archived day? Then a light reduction is applied to the best available day. Is it brighter than anything in the last 60 days? Then a cautious maximum assumption is used.
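Condensed into plain Python, the three branches of the template logic look roughly like this (a sketch using the constants from the value template; the `120` offset keeps the light-reduction ratio finite near 100% cloud cover):

```python
def forecast(pool, f_avg):
    """pool: list of dicts with 'h_avg' (clouds %) and 'y_korr' (season-scaled yield).
    Mirrors the three cases: light reduction, max assumption, weighted average."""
    brighter = [d for d in pool if d["h_avg"] < f_avg]
    darker = [d for d in pool if d["h_avg"] > f_avg]
    if brighter and not darker:
        # today is cloudier than every comparison day -> scale the worst day down
        worst = min(brighter, key=lambda d: d["y_korr"])
        return worst["y_korr"] * max(120 - f_avg, 5.0) / max(120 - worst["h_avg"], 5.0)
    if darker and not brighter:
        # today is brighter than every comparison day -> cautious max assumption
        return max(d["y_korr"] for d in darker)
    if pool:
        # mixed pool -> weighted average, weight = 1 / max(|diff|, 0.5)
        weights = [1.0 / max(abs(d["h_avg"] - f_avg), 0.5) for d in pool]
        return sum(d["y_korr"] * w for d, w in zip(pool, weights)) / sum(weights)
    return 0.0

pool = [{"h_avg": 30.0, "y_korr": 9.0}, {"h_avg": 70.0, "y_korr": 4.0}]
print(round(forecast(pool, 50.0), 2))  # mixed pool -> weighted average
```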

Markdown

{# =================================================================
   PV remaining yield today - Lovelace Markdown Card (Option B: inline template)
   Source sensor:   sensor.pv_hist_remaining_today  (attribute: sql_raw_json)
   Forecast sensor: sensor.pv_hist_weather_forecast (attribute: forecast)

   RECOMMENDED: Use Option A instead of this inline template:
   {{ state_attr('sensor.pv_hist_remaining_today', 'lovelace_card') }}

   Option B: Use this content directly as a Lovelace Markdown card.
   ================================================================= #}
{% set raw_json = state_attr('sensor.pv_remaining_states', 'json') %}
{% if raw_json and raw_json != '[]' and raw_json is not none %}
  {% set data = raw_json | from_json %}

  {% if data | length > 0 %}
    {% set f_avg = data[0].f_avg_heute_rest | float(default=50.0) %}

    {# 1. SEASONAL SNOW DETECTION (Dec / Jan / Feb) #}
    {% set current_month = now().month %}
    {% set schnee_faktor_heute = 1.0 %}
    {% if current_month in [12, 1, 2] %}
      {% set gestern_datum = (now() - timedelta(days=1)).strftime('%Y-%m-%d') %}
      {% set gestern_data = data | selectattr('datum', 'equalto', gestern_datum) | list | first %}
      {% if gestern_data is defined %}
        {% set y_rest_gestern = gestern_data.ertrag_tag_rest | float(default=0) %}
        {% set h_rest_gestern = gestern_data.h_avg_rest | float(default=0) %}
        {% set perf_gestern = y_rest_gestern / ([105 - h_rest_gestern, 5] | max) %}
        {% if perf_gestern < 0.02 %}{% set schnee_faktor_heute = 0.1 %}{% endif %}
      {% endif %}
    {% endif %}

    {# 2. ASTRONOMICAL BASE DATA (latitude from zone.home) #}
    {% set latitude = state_attr('zone.home', 'latitude') | float(48.0) %}
    {% set doy = now().strftime('%j') | int(default=1) %}
    {% set lat_rad = latitude * pi / 180 %}
    {% set decl = -0.4093 * cos(2 * pi * (doy + 10) / 365) %}
    {% set cos_ha = -tan(lat_rad) * tan(decl) %}
    {% set dl_today = 24 / pi * acos([[cos_ha, -1.0] | max, 1.0] | min) %}
    {% set sun_today = 0.65 + 0.35 * cos((doy - 172) * 2 * pi / 365) %}

    {# 3. BUILD THE POOL #}
    {% set ns_pool = namespace(items=[], total_w=0) %}
    {% for item in data %}
      {% set yield_raw = item.ertrag_tag_rest | float(default=0) %}
      {% set clouds = item.h_avg_rest | float(default=0) %}
      {% set clouds_gesamt = item.h_avg_gesamt | float(default=0) %}
      {% set item_dt = as_datetime(item.datum) %}
      {% if item_dt is not none %}
        {% set item_day = item_dt.strftime('%j') | int(default=1) %}
        {% set decl_i = -0.4093 * cos(2 * pi * (item_day + 10) / 365) %}
        {% set cos_ha_i = -tan(lat_rad) * tan(decl_i) %}
        {% set dl_item = 24 / pi * acos([[cos_ha_i, -1.0] | max, 1.0] | min) %}
        {% set sun_item = 0.65 + 0.35 * cos((item_day - 172) * 2 * pi / 365) %}
        {% set s_korr = (sun_today / sun_item) * (dl_today / dl_item) %}
        {% set diff = (clouds - f_avg) | abs %}
        {% set w = 1 / ([diff, 0.5] | max) %}
        {% if yield_raw > 0.05 or clouds > 95 or current_month in [12, 1, 2] %}
          {% set ns_pool.total_w = ns_pool.total_w + w %}
          {% set ns_pool.items = ns_pool.items + [{'datum': item.datum, 'h_avg': clouds, 'h_avg_gesamt': clouds_gesamt, 'y_korr': yield_raw * s_korr, 's_fakt': s_korr, 'w': w, 'ertrag_tag_gesamt': item.ertrag_tag_gesamt, 'filtered': false}] %}
        {% else %}
          {% set ns_pool.items = ns_pool.items + [{'datum': item.datum, 'h_avg': clouds, 'h_avg_gesamt': clouds_gesamt, 'y_korr': yield_raw * s_korr, 's_fakt': s_korr, 'w': 0, 'ertrag_tag_gesamt': item.ertrag_tag_gesamt, 'filtered': true}] %}
        {% endif %}
      {% endif %}
    {% endfor %}

    {% set pool = ns_pool.items | selectattr('filtered', 'equalto', false) | list %}
    {% set brighter = pool | selectattr('h_avg', 'lt', f_avg) | list %}
    {% set darker = pool | selectattr('h_avg', 'gt', f_avg) | list %}
    {% set res = 0 %}
    {% set methode = "No data" %}

    {# 4. Decision logic #}
    {% if brighter | count > 0 and darker | count == 0 %}
      {% set methode = "Light reduction" %}
      {% set worst_day = brighter | sort(attribute='y_korr') | first %}
      {% set res = worst_day.y_korr * ([120 - f_avg, 5.0] | max / [120 - worst_day.h_avg, 5.0] | max) %}
    {% elif darker | count > 0 and pool | selectattr('h_avg', 'le', f_avg) | list | count == 0 %}
      {% set methode = "Max assumption" %}
      {% set res = darker | map(attribute='y_korr') | max %}
    {% elif pool | count > 0 %}
      {% set methode = "Weighted average" %}
      {% set ns_mix = namespace(ws=0) %}
      {% for item in pool %}
        {% set ns_mix.ws = ns_mix.ws + (item.y_korr * item.w) %}
      {% endfor %}
      {% set res = ns_mix.ws / (ns_pool.total_w if ns_pool.total_w > 0 else 1) %}
    {% endif %}

    {% set scale = 1000 if res > 200 else 1 %}
    {% set final_val = (res / scale) * schnee_faktor_heute %}

**Forecast:**
## {{ final_val | round(2) }} kWh
*Basis: **{{ f_avg }}%** clouds | **{{ methode }}***
{% if schnee_faktor_heute < 1.0 %}⚠️ **Snow suspected! ({{ (schnee_faktor_heute * 100) | round(0) }}%)**{% endif %}

| Date | Day clouds | Day yield | Rem. clouds | Rem. yield | Weight |
| :--- | :---: | :---: | :---: | :---: | :---: |
{%- for item in ns_pool.items | sort(attribute='w', reverse=True) %}
| {{ item.datum }} | {{ item.h_avg_gesamt }}% | {{ item.ertrag_tag_gesamt }} | **{{ item.h_avg }}%** | **{{ ((item.y_korr * schnee_faktor_heute) / scale) | round(2) }} <small><small>({{ item.s_fakt | round(2) }}x)</small></small>**{% if item.filtered %}❌{% endif %} | {{ (((item.w / ns_pool.total_w) * 100) if ns_pool.total_w > 0 else 0) | round(1) }}% |
{%- endfor %}

  {% else %}
**No data in SQL result.**
  {% endif %}
{% else %}
**Waiting for SQL data...**
{% endif %}

Tomorrow sensor

Forecast for tomorrow: create a helper template sensor with the following state template:

{# --- FETCH DATA FROM THE SQL SENSOR --- #}
{% set raw_json = state_attr('sensor.pv_remaining_statistics', 'json') %}

{% if raw_json and raw_json != '[]' and raw_json is not none %}
  {% set data = raw_json | from_json %}
  
  {# --- 1. AVERAGE CLOUD COVER TOMORROW (already computed correctly in UTC by the SQL) --- #}
  {# Taking it directly from the SQL data avoids UTC/local-time errors in the forecast comparison #}
  {% set f_avg_morgen = data[0].f_avg_morgen | float(default=50.0) %}

  {# --- 2. ASTRONOMICAL BASE DATA FOR TOMORROW (site-accurate via latitude) --- #}
  {# latitude is passed to the template as a variable by the sensor (hass.config.latitude) #}
  {% set day_morgen = (now() + timedelta(days=1)).strftime('%j') | int %}
  {% set lat_rad = latitude * pi / 180 %}
  {% set decl_m = -0.4093 * cos(2 * pi * (day_morgen + 10) / 365) %}
  {% set dl_morgen = 24 / pi * acos([[(-tan(lat_rad) * tan(decl_m)), -1.0] | max, 1.0] | min) %}
  {% set sun_morgen = 0.65 + 0.35 * cos((day_morgen - 172) * 2 * pi / 365) %}

  {# --- 3. POOL MATCHING (HISTORICAL COMPARISON) --- #}
  {# We compare tomorrow's forecast with the total daily yields of the past #}
  {% set ns_pool = namespace(items=[], total_w=0) %}
  {% for item in data %}
    {% set yield_total = item.ertrag_tag_gesamt | float %}
    {% set clouds_hist = item.h_avg_gesamt | float %}
    
    {# Seasonal correction: scales the historical yield to tomorrow's sun level #}
    {% set item_day = as_datetime(item.datum).strftime('%j') | int %}
    {% set decl_i = -0.4093 * cos(2 * pi * (item_day + 10) / 365) %}
    {% set dl_item = 24 / pi * acos([[(-tan(lat_rad) * tan(decl_i)), -1.0] | max, 1.0] | min) %}
    {% set sun_item = 0.65 + 0.35 * cos((item_day - 172) * 2 * pi / 365) %}
    {% set s_korr = (sun_morgen / sun_item) * (dl_morgen / dl_item) %}
    
    {# Weighting: the closer the cloud cover values are, the higher the weight #}
    {% set diff = (clouds_hist - f_avg_morgen) | abs %}
    {% set w = 1 / ([diff, 0.5] | max) %}
    
    {% set ns_pool.total_w = ns_pool.total_w + w %}
    {% set ns_pool.items = ns_pool.items + [{'y_korr': yield_total * s_korr, 'h_avg': clouds_hist, 'w': w}] %}
  {% endfor %}

  {# --- 4. DECISION LOGIC --- #}
  {% set pool = ns_pool.items %}
  {% set brighter = pool | selectattr('h_avg', 'lt', f_avg_morgen) | list %}
  {% set darker = pool | selectattr('h_avg', 'gt', f_avg_morgen) | list %}
  {% set res = 0 %}

  {% if brighter | count > 0 and darker | count == 0 %}
    {# Case A: tomorrow will be cloudier than every day in the pool -> light reduction based on the worst day #}
    {% set worst_day = brighter | sort(attribute='y_korr') | first %}
    {% set res = worst_day.y_korr * ([120 - f_avg_morgen, 5.0] | max / [120 - worst_day.h_avg, 5.0] | max) %}
    
  {% elif darker | count > 0 and brighter | count == 0 %}
    {# Case B: tomorrow will be brighter than every day in the pool -> cautious max assumption #}
    {% set res = darker | map(attribute='y_korr') | max %}
    
  {% elif pool | count > 0 %}
    {# Case C: mixed pool -> weighted average of all comparison days #}
    {% set ns_mix = namespace(ws=0) %}
    {% for item in pool %}
      {% set ns_mix.ws = ns_mix.ws + (item.y_korr * item.w) %}
    {% endfor %}
    {% set res = ns_mix.ws / ns_pool.total_w %}
  {% endif %}

  {# Output: convert Wh to kWh if the value is very high (sanity check) #}
  {% set final_scale = 1000 if res > 200 else 1 %}
  {{ (res / final_scale) | round(2) }}

{% else %}
  {# Fallback when SQL data is missing #}
  0.0
{% endif %}

The result

You don't get a theoretical estimate, but a forecast based on the real performance of your hardware under your individual site conditions.
Advantage: The longer the system runs, the more precise it becomes, as the "pool" of historical twin days is constantly growing.

Conclusion

PV History Forecast is the smart alternative to external forecasting services for anyone who uses Home Assistant and already has historical PV yield data. The integration learns from the past of your own system and thus provides a significantly more system-specific residual forecast - without an external API, without registration, directly from your own HA database.


created by Bernhard | published: 2026-03-24 | Updated: 2026-03-24
