Every lift figure on the Tempora calibration page is recomputable from public files. The calibration engine, the natal-chart records, and the labelled event sets are all open. A reader who clones the repository can rerun the pipeline and obtain the same numbers. This is unusual in astrology and standard in any empirically grounded research field. The article walks through the inputs, the pipeline, and the failure modes that a reproducibility discipline catches.
Reproducibility is an operational property. A claim is reproducible if a third party, given the published inputs and code, can independently regenerate the published outputs. Reproducibility here is not "trust me to run my own code"; it is "here is the code, here are the inputs, here is the command, and the output should match within stated tolerance".
For Tempora's calibrated lift table, reproducibility means three things together. The calibration engine that produces the lift figures is published. The natal chart records that anchor the calibration are published. The labelled event sets used to compute event-set hit rates are published. A reader who has all three can rerun the engine and obtain the same numbers within Monte Carlo seed variance.
This is the difference between publishing claims and publishing computations. Tempora publishes computations. The article on calibrated lift documents what the figures mean; this article documents how to produce them yourself.
Every Tempora lift figure depends on three inputs. All three are published under the CC-BY 4.0 license.
The calibration engine. The engine module loads a natal chart record, evaluates the nine calibrated signatures on each event date in the labelled event set, generates 300 Monte Carlo random dates and scores them, and computes per-signature lift as the ratio of event-set hit rate to baseline hit rate at the activation threshold. The engine is plain Python with the Swiss Ephemeris bindings (pyswisseph) handling planetary positions and the True Pushya Paksha ayanamsa for sidereal alignment.
The natal chart records. Each calibrated chart has a JSON record containing the date, time and timezone of the chart's birth moment, the planetary positions at that moment, the sidereal ayanamsa offset, the house cusps under the Whole Sign convention, and the ascendant degree. The records live in the tools natals folder. They are deterministic outputs of Swiss Ephemeris computation against the documented birth moment; any other reader running the same Swiss Ephemeris call against the same moment obtains the same record.
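As a sketch of what such a record looks like, here is an illustrative JSON fragment. The field names and all values except the documented natal Moon position (6.07° Cancer, Pushya nakshatra) are placeholders, not the published schema; the canonical records in the tools natals folder are authoritative.

```json
{
  "chart": "India_1947",
  "birth_moment": { "date": "1947-08-15", "time": "00:00", "timezone": "IST" },
  "ayanamsa": "True Pushya Paksha",
  "house_system": "Whole Sign",
  "ascendant_deg": 0.0,
  "positions": {
    "Moon": { "sign": "Cancer", "deg": 6.07, "nakshatra": "Pushya" }
  },
  "house_cusps": [0.0]
}
```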
The labelled event sets. Each chart's event set is a list of historically significant events, each with a date, a category and a source citation. Event sets currently range from four to fifteen events per chart. The selection process is the most subjective step in the pipeline; it is the input the audit discipline is hardest on.
The reproducibility procedure is three steps: clone the repository, install the Python dependencies (pyswisseph, numpy and pandas), and invoke the calibration module on a country code.
```bash
# clone the public repository
git clone https://github.com/tempora-research/tempora-research.git
cd tempora-research

# install dependencies
pip install -r requirements.txt

# run calibration on a single chart
python -m engine.calibrate India

# run calibration on all six calibrated charts
python -m engine.calibrate all
```
The module loads the natal record, loads the event set, computes signature scores on every event date, runs the Monte Carlo baseline at 300 random dates, computes per-signature lift, and writes the per-chart calibrated-weights output. The output matches the published lift table within Monte Carlo seed variance, typically a few percent on each lift figure.
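The core arithmetic the module performs can be sketched in a few lines. This is a minimal illustration, not the engine's actual API: the function names, the toy scorer, and the synthetic event scores below are all invented for the example; only the 300-date baseline, the 1.0 activation threshold, and the hit-rate-ratio definition of lift come from the published pipeline.

```python
import random

ACTIVATION_THRESHOLD = 1.0  # threshold used in the published lift table
N_BASELINE = 300            # Monte Carlo baseline size used by the engine

def hit_rate(scores, threshold=ACTIVATION_THRESHOLD):
    """Fraction of dates whose signature score meets the activation threshold."""
    return sum(s >= threshold for s in scores) / len(scores)

def signature_lift(event_scores, score_fn, date_range, seed=42):
    """Lift = event-set hit rate / Monte Carlo baseline hit rate."""
    rng = random.Random(seed)  # a published seed makes the baseline deterministic
    start, end = date_range
    baseline_dates = [rng.uniform(start, end) for _ in range(N_BASELINE)]
    baseline_scores = [score_fn(d) for d in baseline_dates]
    base = hit_rate(baseline_scores)
    return hit_rate(event_scores) / base if base > 0 else float("inf")

# Toy stand-in scorer over Julian-day-like floats; NOT the real signature evaluator.
toy_score = lambda d: (d % 10) / 5.0
events = [1.2, 1.5, 0.3, 2.0]  # synthetic event scores, not engine output
lift = signature_lift(events, toy_score, (2430000.0, 2460000.0))
```

The real engine replaces `toy_score` with the Swiss Ephemeris driven signature evaluation; the shape of the computation is the same.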
If the output does not match, one of three things has happened. The local Swiss Ephemeris build is producing different planetary positions; this is rare but possible if the ephemeris data files are not synchronised. The natal-record file has been modified locally; the published record is the canonical input. The Monte Carlo seed has produced a baseline at the tail of its distribution; rerun and the result will converge.
Reproducibility holds against the published inputs; it does not hold if a reader changes them. Changing the orb on a signature, adding events to the labelled set, or moving the activation threshold all produce different lift figures. Under the publish-then-modify discipline these are new computations, not reproductions.
The current public corpus covers six national charts. Each chart has a documented birth moment, a published natal record, a labelled event set, and a per-signature lift table. The current canonical birth moments are:
| Chart | Birth moment | Event set size |
|---|---|---|
| India 1947 | 15 August 1947, 00:00 IST, New Delhi (Independence) | 15 |
| Russia 1991 | 12 June 1991, 12:45 MSK, Moscow (Russian Federation founding) | 11 |
| United States 1776 | 4 July 1776, 17:10 LMT, Philadelphia (Sibly chart) | 14 |
| United Kingdom 1801 | 1 January 1801, 00:00 GMT, London (Act of Union) | 9 |
| China 1949 | 1 October 1949, 15:01 CST, Beijing (PRC founding) | 10 |
| Pakistan 1947 | 14 August 1947, 09:00 PKT, Karachi (Independence) | 4 |
The Russia chart canon was reconciled in the May 2026 audit cycle. The 12 June 1991 founding moment is canonical because it matches the per-event positional data already used in the Tempora calibration stack. An alternative late-1991 founding moment had been cited in earlier corpus articles and has been retired across the published surface. The reconciliation is documented in the canonical-charts surface alongside the per-chart event sets.
Iran 1979 is a seventh chart used in the active forward-call corpus but currently uncalibrated; the natal record is published and the chart is well-attested in standard mundane references but the event set is not yet labelled at the depth required for full lift calibration.
Take the India Saturn-Moon-opposition lift figure as a worked example. The calibration sequence on this signature for the India 1947 chart is:
Step 1 - load the natal chart. The engine reads the India 1947 record, which encodes natal Moon at 6.07° Cancer (Pushya nakshatra) under the True Pushya Paksha ayanamsa, with the chart anchored to 15 August 1947 00:00 IST in New Delhi.
Step 2 - score each event. For every event in the India event set (Indo-Pak war 1971, Emergency 1975, demolition 1992, Pokhran II 1998, Kargil 1999, COVID lockdown 2020 and others), the engine evaluates the Saturn-Moon-opposition signature. The score combines orb tightness on the natal Moon's opposition point in Capricorn with the dasha state on the event date.
Step 3 - run the Monte Carlo baseline. The engine generates 300 random dates within the calendar range of the event set and computes the same Saturn-Moon-opposition score on each. This produces the baseline distribution against which the event-set distribution is compared.
Step 4 - compute lift. The lift is the ratio of the event-set hit rate at the 1.0 activation threshold to the Monte Carlo hit rate at the same threshold. The published India Saturn-Moon-opposition raw lift is 3.605. After the small-sample adjustment, the calibrated weight is 6.457; downstream forward-call code reads the calibrated weight, and the 1.5x publication threshold is applied to it.
A reader running this sequence on the published inputs obtains the same figures. The pipeline is deterministic given a Monte Carlo seed; the seed is published with the engine.
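The determinism claim is worth making concrete. A seeded pseudo-random generator replays the same baseline dates on every run, so the same seed yields the same lift, while a different seed yields a slightly different baseline (the "seed variance" within which outputs are expected to match). A minimal illustration, with a function name invented for the example:

```python
import random

def monte_carlo_baseline(seed, n=300, lo=0.0, hi=1.0):
    """Draw the n random baseline dates used in the lift denominator."""
    rng = random.Random(seed)
    return [rng.uniform(lo, hi) for _ in range(n)]

run_a = monte_carlo_baseline(seed=2026)
run_b = monte_carlo_baseline(seed=2026)
run_c = monte_carlo_baseline(seed=2027)

same_seed_identical = run_a == run_b  # identical baseline, hence identical lift
diff_seed_identical = run_a == run_c  # different baseline: the seed variance
```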
The reproducibility discipline does more than allow third-party verification. It catches the framework's own drift. When a published article narrates a lift figure as supported by named historical events, the audit reruns the engine on those events and counts how many actually score above threshold for the cited signature. If the count is below two, the lift figure is event-set depleted and the publication is flagged. This is exactly what the 9 May 2026 audit caught on twelve of fifty-four pairs. Reproducibility is the discipline that catches narrative-versus-math drift before it propagates further.
Tempora's research-publishing standards document names two failure modes that are directly relevant to reproducibility. Both are public; both are documented in the standards file alongside the gates that catch them.
Failure mode 10.13 - calibration narrative-versus-math drift. The published article cites a lift figure correctly as a number but names historical events as supporting cases that the engine does not actually score above threshold for the cited signature. The number is right; the supporting prose is wrong. This was caught on the Russia 1998 default and 2022 invasion entries against the Mars-Rahu signature: the engine returns 33.76° and 123.92° separation respectively, neither within the conjunction orb. The supporting-case prose was retired and the lift figure flagged for recalibration.
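The separation check behind that finding is simple spherical-angle arithmetic: the smallest angular distance between two ecliptic longitudes, compared against a conjunction orb. The sketch below uses an illustrative 8° orb, since the article does not state the engine's actual orb value; the published code is authoritative.

```python
def separation_deg(lon_a, lon_b):
    """Smallest angular separation between two ecliptic longitudes, in degrees."""
    return abs((lon_a - lon_b + 180.0) % 360.0 - 180.0)

def within_conjunction_orb(sep, orb_deg=8.0):
    """orb_deg is illustrative; the engine's actual orbs live in the published code."""
    return sep <= orb_deg

# The separations the audit reports for the two Russia Mars-Rahu events,
# neither of which falls inside any plausible conjunction orb:
assert not within_conjunction_orb(33.76)
assert not within_conjunction_orb(123.92)
```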
Failure mode 10.14 - event-set depletion. The published lift figure is technically computed but the underlying event set has fewer than two events scoring above threshold for the cited signature. A weight derived from a single event is a coin flip pretending to be a measurement. Twelve of fifty-four pairs failed this gate in the 2026-05-09 audit. The affected weights are flagged on the corresponding article surfaces and are being rebuilt with stricter event-credit requirements.
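The depletion gate itself reduces to a count. A hedged sketch, with invented function names standing in for the audit tooling:

```python
ACTIVATION_THRESHOLD = 1.0
MIN_SUPPORTING_EVENTS = 2  # the gate defined by failure mode 10.14

def is_event_set_depleted(event_scores, threshold=ACTIVATION_THRESHOLD):
    """Flag a chart-signature pair whose lift rests on fewer than two events."""
    supporting = sum(s >= threshold for s in event_scores)
    return supporting < MIN_SUPPORTING_EVENTS

# One above-threshold event out of three: the weight is a coin flip, so flag it.
flagged = is_event_set_depleted([1.4, 0.2, 0.7])
```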
Naming the failure modes in advance, separately from any specific audit incident, is part of the discipline. The framework is open about what can go wrong in calibration; the audits confirm whether the gates catch what they were designed to catch.
Reproducibility is a property of the math, not of the inputs. There are several limitations the framework cannot eliminate by being open.
Sample size. Event counts per chart are small: four to fifteen events. Monte Carlo calibration partially compensates, but confidence intervals on lift figures are wide. Reproducibility does not narrow the confidence interval; it only ensures the figure is correctly computed within it.
Survivorship bias. Events were selected as historically significant by consensus. Selection is itself a judgment. Reproducibility does not eliminate the selection bias; it makes the selection visible. A reader who disagrees with a particular event's inclusion can rerun the calibration with the event removed and observe how much the lift figure moves.
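The remove-and-rerun check described above is a leave-one-out sensitivity sweep. A minimal sketch with synthetic scores and a fixed baseline hit rate (none of these numbers are real engine output):

```python
def hit_rate(scores, threshold=1.0):
    return sum(s >= threshold for s in scores) / len(scores)

def leave_one_out_lifts(event_scores, baseline_hit_rate):
    """Recompute lift with each event removed, to see how far one event moves the figure."""
    lifts = []
    for i in range(len(event_scores)):
        reduced = event_scores[:i] + event_scores[i + 1:]
        lifts.append(hit_rate(reduced) / baseline_hit_rate)
    return lifts

# Synthetic example: four event scores against a 0.25 baseline hit rate.
lifts = leave_one_out_lifts([1.2, 1.5, 0.3, 2.0], baseline_hit_rate=0.25)
```

A large spread in the resulting lifts signals that the published figure leans heavily on a single event, which is exactly the condition the depletion gate exists to catch.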
Birth-time uncertainty. National-chart birth times are recorded to varying precision. A two-hour error in birth time shifts the natal Moon by approximately one degree, affecting nakshatra-boundary cases and the small-orb signatures. Reproducibility does not resolve birth-time disputes; the canonical-charts document declares Tempora's chosen anchor.
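The "approximately one degree" figure follows from the Moon's mean daily motion of about 13.2° per day (a standard astronomical value; the actual instantaneous speed varies between roughly 11.8° and 15.4° per day, so the true shift for a given chart may differ):

```python
MEAN_MOON_MOTION_DEG_PER_DAY = 13.2  # approximate mean; varies ~11.8-15.4 deg/day

def moon_shift_deg(birth_time_error_hours):
    """Approximate natal Moon displacement for a given birth-time error."""
    return MEAN_MOON_MOTION_DEG_PER_DAY * birth_time_error_hours / 24.0

shift = moon_shift_deg(2.0)  # a two-hour error shifts the Moon by about 1.1 degrees
```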
Selection of signatures. The nine signatures in the calibration set are not exhaustive of the Vedic technique vocabulary. They are the signatures Tempora has operationalised so far. Other signatures from the classical literature have either been dropped from earlier candidate sets or are queued for future calibration. Reproducibility lets a reader extend the framework to additional signatures; it does not justify the current selection on its own.
Reproducibility makes the framework auditable. It does not make the framework infallible.
The CC-BY 4.0 license that covers the published methodology, code, natal records and lift table allows reuse with attribution. A few use cases the framework actively supports:
Independent verification of a specific forward call. A reader who wants to check whether a published forward call's lift figure is supported can rerun the calibration on the relevant chart and signature, count the events scoring above threshold, and reach an independent verdict. The 2026-05-09 audit did this internally; a third party could do the same externally.
Extension to additional charts. A researcher with a labelled event set for a chart Tempora has not calibrated (a different national chart, a corporate founding chart, a major individual chart) can use the published engine to compute a per-signature lift table on the new chart. The output is comparable to the existing table.
Replication studies. A skeptical reader can rerun the calibration with stricter inclusion rules, smaller event sets, alternative ayanamsa choices, or alternative house systems and observe how the lift figures move. The framework's robustness to these modifications is itself a research question; the open code lets it be asked.
Building on Tempora's prior art. Researchers who publish their own calibrated-lift work can cite Tempora as prior art under the CC-BY attribution requirement. The intent of the license is that the methodology spreads, with attribution; what Tempora is unwilling to accept is the methodology being claimed by others as their own original framework.
The license covers methodology, code, natal records and the calibrated lift table. Several internal artifacts are not public and are not reproducible from outside the firm.
Internal working papers. The audit verdicts, the per-event backtest ledger, the dropped-signatures library, and the calibration-in-progress drafts are internal. The conclusions of these papers feed the public surface (standards file, audit summaries, recalibration notices) but the working drafts themselves are not published.
Customer and conversion data. The Kaal Imprint birth-data submissions, the briefing customer data, and the email-list subscribers are private and protected. Reproducibility of the methodology does not extend to reproducibility of the user-facing product analytics.
Raw market data. Some market datasets used in market-cluster backtests are licensed from third parties under terms that do not allow redistribution. Where a backtest depends on such data, the result is published but the raw input is not. A reader who has independent access to the same data source can rerun the backtest; a reader who does not cannot.
The line between public and private is documented in the LICENSE file and in the open-source-data article. The principle is simple: methodology is open; user data and licensed third-party data are not.
Most astrology platforms publish interpretations rather than computations. Where computations exist they are typically wrapped in proprietary software with closed event sets, closed weights and no public reproduction path. The reader has to trust the published number because there is no way to check.
Tempora has chosen the opposite posture. The published number is checkable. The number is supported by named events that the engine actually scored above threshold; if it is not, the audit catches it and the public surface is updated. The discipline is not a marketing claim about openness; it is a working pipeline that catches its own drift.
This is the structural difference between a research firm and a divination practice. A divination practice cannot publish its underlying numbers because there are no underlying numbers. A research firm can. To Tempora's knowledge it is the only astrology operation publishing the full reproducibility stack, and that reproducibility is what the brand position rests on.
Reproducible means a third party with the published code, the published natal chart records, and the published event sets can independently regenerate every lift figure on the Tempora calibration page. Reproducibility is not a marketing claim. It is an operational property: the figures are recomputable. If a reader reruns the calibration engine on the public inputs, they obtain the same numbers.
Clone the public repository, install the Python dependencies including pyswisseph and the standard scientific stack, then invoke the calibration module via python -m engine.calibrate followed by the country code. The module loads the natal chart record from tools/natals, the event set from data/results, runs the Monte Carlo baseline of 300 random dates, computes per-signature lift, and writes the per-chart calibrated weights file. Output matches the published lift table within Monte Carlo seed variance.
Six national chart records are public: India 1947 (Independence midnight Delhi), Russia 1991 (12 June founding moment Moscow), United States 1776 (Sibly 17:10 LMT Philadelphia), United Kingdom 1801 (Act of Union midnight London), China 1949 (PRC founding 15:01 CST Beijing), Pakistan 1947 (14 August 09:00 PKT Karachi). Each record contains the planetary positions, sidereal ayanamsa, house cusps and ascendant degree. Iran 1979 is a seventh chart used in forward calls but not yet calibrated.
The audit discipline runs the calibration engine against every published lift figure and counts how many events in the underlying labelled set actually score above the activation threshold for the cited signature. Pairs with fewer than two events scoring above threshold are flagged as event-set depleted and the lift figure is retired pending recalibration. The 9 May 2026 audit cycle flagged twelve of fifty-four chart-signature pairs as failing this gate. Two systematic failure modes were also surfaced: Saturn-axis over-credit and calibration narrative-versus-math drift.
Reproducibility holds for the calibration math given the published inputs. It does not hold for the inputs themselves. Event-set construction involves judgment about which historical events count as significant for a given chart. Survivorship bias is acknowledged. Birth-time records for several national charts carry uncertainty of a few minutes which propagates to small-orb signatures. Sample sizes are small, four to fifteen events per chart. Reproducibility makes the framework auditable, not infallible.
Most astrology platforms publish interpretations rather than computations. Where computations are present they are typically wrapped in proprietary software with closed event sets, closed weights and no public reproduction path. Tempora publishes the engine, the natal records, the event sets and the calibrated weights together, with named failure modes documented in the standards file. This lets any researcher, journalist or sceptical reader verify a forward call against the source data. Attribution under the CC-BY 4.0 license is the only constraint on reuse.
This article documents the reproducibility property of the Tempora calibration framework as of 9 May 2026. Lift figures cited are reproducible against the calibration engine and the calibrated-weights table published with Research Note 005. The 2026-05-09 audit findings referenced here are summarised from an internal audit log maintained as part of Tempora's research-publishing standards. This article is a method-defining piece for the Tempora corpus and does not constitute scientific peer-reviewed publication. It does not constitute medical, financial, legal or professional advice. Article first published 2026-05-09 by Tempora Research.