How I Hooked Oil Prices, Shock Events, and Recovery Phases Together
Sometimes the useful bit is not the chart.
It is the structure behind the chart.
For this project, I wanted more than a historical oil price line. I wanted a way to connect:
- price history
- real-world shock events
- a reusable recovery model
So instead of hardcoding everything into one file, I split it into three separate datasets and then joined them in the component.
That gave me something much more useful:
not just “oil moved here,” but “this event happened here, and these were the phases that followed.”
This post is the practical walkthrough of how I wired that together.
The three things I separated
I split the project into three logical parts.
1. Price series
The component fetches price history from a processed JSON endpoint:
const S3_BASE = "https://oil-pipeline-processed-786230492402.s3.eu-west-2.amazonaws.com";
const URL_PRICES = `${S3_BASE}/processed/latest.json`;
That file becomes the backbone of the whole thing. In the component, the returned series array is used to build date labels plus Brent and WTI arrays.
const priceJson = await priceRes.json();
const series = priceJson.series || [];
// `sampled` is the series after downsampling (step not shown here)
priceData = {
  labels: sampled.map(r => r.date),
  brent: sampled.map(r => r.brent_close),
  wti: sampled.map(r => r.wti_close),
};
That gives me a clean time-series input.
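The `sampled` array in the snippet above is a thinned copy of `series`. Here is a runnable sketch of that step, assuming a simple every-Nth-row strategy; the component's actual sampling logic may differ, and the rows are illustrative values, not real prices.

```javascript
// Hypothetical downsampling step: keep every Nth row so decades of
// daily prices stay light on the chart.
function downsample(rows, step = 7) {
  return rows.filter((_, i) => i % step === 0);
}

// Illustrative rows in the shape the processed JSON uses
const series = [
  { date: "2022-02-24", brent_close: 99.1, wti_close: 92.8 },
  { date: "2022-02-25", brent_close: 97.9, wti_close: 91.6 },
  { date: "2022-02-28", brent_close: 101.0, wti_close: 95.7 },
];

const sampled = downsample(series, 2);
const priceData = {
  labels: sampled.map(r => r.date),
  brent: sampled.map(r => r.brent_close),
  wti: sampled.map(r => r.wti_close),
};
// priceData.labels → ["2022-02-24", "2022-02-28"]
```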
2. Shock event metadata
Then I keep the actual shocks in a separate event file:
const URL_EVENTS = `${S3_BASE}/static/global_shock_events.json`;
The component fetches that alongside the prices instead of embedding event text directly in the code.
const eventsJson = eventsRes.ok ? await eventsRes.json() : { events: [] };
events = eventsJson.events || [];
That matters because events are content, not logic.
If I want to add a new geopolitical disruption later, I can update the event JSON without rewriting the component.
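For reference, an entry in that event file might look something like this. Only the `events` array and the `event_id` key are confirmed by the component code; the other field names here are my assumptions about a plausible shape.

```json
{
  "events": [
    {
      "event_id": "russia_ukraine_2022",
      "title": "Russia-Ukraine war",
      "date": "2022-02-24",
      "note": "Supply-risk repricing begins with the invasion."
    }
  ]
}
```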
3. Phase modelling
The third file is where the project becomes more than a basic annotation layer.
I created a separate shock_phases.json file that joins back to the event list on event_id. The file documents that relationship in its own description: it is a three-phase breakdown for each oil shock, joined on event_id against the shock events file.
Each entry looks like this:
{
  "event_id": "russia_ukraine_2022",
  "event_title": "Russia-Ukraine war",
  "spike": { "start": "2022-02-24", "end": "2022-03-08", "note": "Brent surges from $95 to $127 in under two weeks." },
  "normalization": { "start": "2022-03-08", "end": "2022-06-01", "note": "Prices plateau in the $100–120 range as sanctions are absorbed." },
  "restoration": { "start": "2022-06-01", "end": "2022-12-31", "note": "Demand concerns and SPR releases push Brent back toward $80–85." }
}
That pattern is used repeatedly across the phase file for events like the Gulf War, the 2008 oil spike, COVID demand shock, Red Sea disruption, and the 2024 Iran-Israel escalation.
Why I kept them separate
This is the bit that matters most.
A lot of quick projects dump everything into one file:
- dates
- event labels
- notes
- chart logic
- colors
- interaction state
That works for about five minutes.
Then it becomes a pain to maintain.
By splitting the system into:
- price data
- event metadata
- phase definitions
I made each part responsible for one thing.
That means:
- I can refresh price data without touching event text
- I can add or amend shocks without editing rendering logic
- I can refine the recovery model without changing the raw series
That is cleaner engineering, but it is also better data science, because the model becomes inspectable.
The fetch pattern
The first practical step was fetching all three sources together.
I used Promise.all() so the component loads the three datasets in parallel.
const [priceRes, eventsRes, phasesRes] = await Promise.all([
  fetch(URL_PRICES),
  fetch(URL_EVENTS),
  fetch(URL_PHASES),
]);
Then each response is parsed separately:
const priceJson = await priceRes.json();
const eventsJson = eventsRes.ok ? await eventsRes.json() : { events: [] };
const phasesJson = phasesRes.ok ? await phasesRes.json() : { phases: [] };
This is a simple pattern, but it is a good one:
- fetch all inputs at once
- fail hard on the essential dataset
- fail softly on secondary enrichment files if needed
In my case, the prices are mandatory, while events and phases can safely default to empty structures.
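That policy can be made explicit in one place. Here is a sketch under my own naming: `loadAll` is a hypothetical wrapper, not the component's actual function, but it shows the fail-hard / fail-soft split on the same three responses.

```javascript
// Sketch of the fail-hard / fail-soft policy. `loadAll` is a hypothetical
// wrapper name; the real component inlines this logic.
async function loadAll(priceRes, eventsRes, phasesRes) {
  if (!priceRes.ok) {
    // Prices are mandatory: without them there is nothing to draw.
    throw new Error(`Price fetch failed: ${priceRes.status}`);
  }
  const priceJson = await priceRes.json();
  // Events and phases are enrichment: degrade to empty structures.
  const eventsJson = eventsRes.ok ? await eventsRes.json() : { events: [] };
  const phasesJson = phasesRes.ok ? await phasesRes.json() : { phases: [] };
  return { priceJson, eventsJson, phasesJson };
}
```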
Building the join
Once the phase file is loaded, I convert it into a lookup map keyed by event_id.
for (const p of (phasesJson.phases || [])) {
  phasesMap[p.event_id] = {
    spike: p.spike,
    normalization: p.normalization,
    restoration: p.restoration,
  };
}
That one step is the core join.
Instead of repeatedly searching the full phase array every time a user interacts with an event, I prepare a dictionary-style object up front.
That gives me fast access later:
const phases = phasesMap[activeEvent];
This is a small thing, but it is exactly the kind of small thing that makes interactive data products feel cleaner and easier to reason about.
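If you prefer a functional style, the same join can be built with Object.fromEntries. A sketch, with an inline stand-in for the fetched phase file:

```javascript
// Same join as the for loop, built with Object.fromEntries.
// `phasesJson` here is an illustrative stand-in for the fetched file.
const phasesJson = {
  phases: [
    {
      event_id: "russia_ukraine_2022",
      spike: { start: "2022-02-24", end: "2022-03-08" },
      normalization: { start: "2022-03-08", end: "2022-06-01" },
      restoration: { start: "2022-06-01", end: "2022-12-31" },
    },
  ],
};

const phasesMap = Object.fromEntries(
  (phasesJson.phases || []).map(p => [
    p.event_id,
    { spike: p.spike, normalization: p.normalization, restoration: p.restoration },
  ])
);
// phasesMap["russia_ukraine_2022"].spike.start → "2022-02-24"
```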
The data model I used
The phase model is deliberately simple.
Every shock has up to three named sections:
- spike
- normalization
- restoration
And each one contains:
- start
- end
- note
That makes the structure predictable.
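Predictability also makes the file easy to lint. Here is a minimal check I would add; `validatePhaseEntry` is my own addition, not part of the original component.

```javascript
// Minimal schema check: every phase that is present must carry
// start, end, and note. Phases themselves are optional, since a
// shock has *up to* three sections.
function validatePhaseEntry(entry) {
  const errors = [];
  for (const ph of ["spike", "normalization", "restoration"]) {
    const p = entry[ph];
    if (!p) continue;
    for (const field of ["start", "end", "note"]) {
      if (!p[field]) errors.push(`${entry.event_id}.${ph}.${field} is missing`);
    }
  }
  return errors;
}
```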
For example, the 2024 Iran-Israel escalation is stored with very short windows, reflecting how quickly the premium appeared and faded.
The 2008 and COVID entries stretch much longer because those were not just brief headlines; they evolved into deeper market shifts.
The point is not that the model is academically perfect.
The point is that it is:
- consistent
- readable
- editable
- useful
That is usually what you want first.
Why I used event_id as the glue
The whole thing works because the datasets share a stable key.
That key is event_id.
The phase file explicitly says it joins on event_id, and the component uses that to line phases up with whichever event is active.
That avoids fuzzy matching on titles like:
- “Russia-Ukraine war”
- “Russia Ukraine”
- “Ukraine invasion”
Never join on text when you can join on an ID.
That one decision saves a lot of future pain.
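One concrete failure mode: a single typographic difference in a title breaks the join silently, while the stable ID either matches or it does not.

```javascript
// Title joins fail on invisible differences; ID joins do not.
const eventTitles = ["Russia-Ukraine war", "Russia Ukraine", "Ukraine invasion"];
const phaseTitle = "Russia–Ukraine war"; // en dash, not a hyphen

console.log(eventTitles.includes(phaseTitle)); // false: silent miss
console.log(["russia_ukraine_2022"].includes("russia_ukraine_2022")); // true
```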
The interaction logic
When an event is selected, the component looks up the related record in phasesMap, then renders each available section in order.
if (phases) {
  ["spike", "normalization", "restoration"].forEach(ph => {
    const p = phases[ph];
    if (!p) return;
    // render note block
  });
}
That means the UI logic is not deciding what the phases are.
The data is deciding.
That is exactly how I wanted it.
The component should be dumb enough to display the model, not reinvent it.
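Filled out, that loop might look like this. `renderPhases` is my own name, and it returns strings where the real component renders DOM nodes; the point is that the function only formats what the data gives it.

```javascript
// Fleshed-out version of the loop above. The data decides which phases
// exist; the function only formats what it finds, in a fixed order.
function renderPhases(phases) {
  const out = [];
  if (!phases) return out;
  for (const ph of ["spike", "normalization", "restoration"]) {
    const p = phases[ph];
    if (!p) continue;
    out.push(`${ph} (${p.start} to ${p.end}): ${p.note}`);
  }
  return out;
}
```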
Why this is useful beyond one chart
This structure gives me reuse straight away.
Once I have:
- a price series
- an events file
- a keyed phase model
I can reuse the same pattern for other datasets too.
For example:
- gas prices
- heating oil
- shipping costs
- inflation spikes
- electricity prices
The principle is the same every time:
- keep the raw time series separate
- keep the event catalogue separate
- keep the interpreted phase model separate
- join with a stable ID
That turns a one-off visual into a repeatable pattern.
The quiet data-science win
The clever bit here is not machine learning.
It is structure.
I took something messy and mixed:
- prices
- geopolitics
- disruption narratives
- recovery timing
and turned it into a predictable model that can be queried, rendered, and extended.
That is real data-science work.
Not all useful data science is prediction.
Sometimes it is building a shape that reality can fit into.
What I would recommend if you build something similar
Keep it boring in the right places.
Use:
- one canonical time-series source
- one event catalogue
- one join key
- one simple phase schema
Do not try to make the component too clever.
Let the files do the talking.
If you want to add complexity later, great. But start with a structure you can explain to yourself six weeks later.
That is usually the real test.
Final thought
The reason I hooked it together this way is simple:
I did not want a chart that only showed movement.
I wanted a small system that could say:
- what happened
- when it happened
- what phase came next
- and how that event sat inside the wider price history
That is the difference between visualising data and structuring it.
And for this kind of project, the structure is the valuable bit.
Gareth Winterman