Dashboard¶
Data Loader¶
Data loading bridge between the scr_financial data pipeline and the dashboard.
Uses DataPreprocessor / EBACollector / ECBCollector / MarketDataCollector to build the dicts that BankingSystemSimulation expects.
- dashboard.data_loader.load_simulation_inputs(start_date='2020-01-01', end_date='2024-12-31', bank_list=None, snapshot_date=None)[source]¶
Load bank data, network data, and system indicators from the data pipeline.
Data API¶
Fast, API-based financial data fetcher for the SCR dashboard.
Data sources (no LLM required for any of these):
- yfinance — bank stock prices, return correlations (→ adjacency matrix A),
market cap, balance sheet financials (total assets, equity)
- ECB SDW — sovereign bond yields (IT, DE, FR, ES, NL, SE),
EUR/USD rate, ECB deposit facility rate
- FRED — TED spread, VIX (systemic stress proxies)
Correlation-based edge weights¶
As per the SCG proposal (§2.1), edges are defined by the Pearson correlation of bank stock daily returns over a rolling window. This is real, daily data that is:
Updated automatically every trading day
Directly computable without regulatory disclosures
Already used in the SCG literature (Mantegna 1999, Tumminello 2007)
All network fetches are parallelised via ThreadPoolExecutor for speed.
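The thresholded correlation adjacency described above can be sketched as follows. This is an illustrative, self-contained version using synthetic returns; the `correlation_adjacency` helper name is not part of the dashboard API:

```python
import numpy as np

def correlation_adjacency(returns, min_corr=0.3):
    """Thresholded Pearson-correlation adjacency from a [T, N] return matrix."""
    corr = np.corrcoef(returns, rowvar=False)  # [N, N] pairwise Pearson correlations
    np.fill_diagonal(corr, 0.0)                # no self-loops
    corr[np.abs(corr) < min_corr] = 0.0        # drop weak edges (threshold filtering)
    return corr

# Synthetic daily returns for 4 banks sharing a common market factor
rng = np.random.default_rng(0)
common = rng.normal(size=252)
returns = np.column_stack([common + 0.5 * rng.normal(size=252) for _ in range(4)])
A = correlation_adjacency(returns, min_corr=0.3)
```

Because thresholding is applied symmetrically, the result stays a symmetric weighted adjacency; the dashboard additionally stores only the upper triangle as a nested dict.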
- dashboard.data_api.fetch_correlation_adjacency(bank_ids=None, window_days=252, min_corr=0.3, pmfg=False)[source]¶
Build the correlation-based adjacency matrix from daily bank stock returns.
As per §2.1 of the SCG proposal: edges are Pearson correlations of returns. Weak edges (< min_corr) are removed (threshold filtering, §2.3).
- Parameters:
bank_ids (list, optional) – Bank IDs to include (defaults to all 10).
window_days (int) – Rolling return window in trading days.
min_corr (float) – Threshold below which edges are zeroed out.
pmfg (bool) – If True, apply the Planar Maximally Filtered Graph (slower but cleaner).
- Returns:
dict {source_id: {target_id: weight}} — upper-triangular correlation weights
- Return type:
dict
- dashboard.data_api.fetch_bank_market_features(bank_ids=None)[source]¶
Fetch per-bank market and fundamental features from yfinance in parallel.
- dashboard.data_api.fetch_sovereign_spreads()[source]¶
Fetch 10Y sovereign bond yields from ECB SDW and compute IT-DE spread as a systemic stress proxy.
- Returns:
dict {country: latest_yield_pct, …, ‘IT_DE_spread’: float, ‘ES_DE_spread’: float}
- Return type:
dict
- dashboard.data_api.fetch_system_indicators()[source]¶
Fetch system-level stress indicators from free public APIs in parallel.
- Returns a dict suitable for BankingSystemSimulation.system_indicators:
CISS — derived from IT-DE spread and bank volatility
funding_stress — from bank stock volatility index
sovereign_stress — IT-DE 10Y spread
eurusd — EUR/USD rate
- dashboard.data_api.fetch_all(bank_ids=None, correlation_window=252, min_corr=0.3)[source]¶
Full parallel fetch: adjacency matrix + node features + system indicators.
- dashboard.data_api.build_simulation_inputs_from_api(bank_ids=None, correlation_window=252, min_corr=0.3)[source]¶
Build (bank_data, network_data, system_indicators) directly from market APIs.
Same output format as data_loader.load_simulation_inputs — can be used as a drop-in replacement when live data is preferred over the EBA pipeline.
- dashboard.data_api.build_daily_graph_snapshots(bank_ids=None, lookback_years=3, corr_window=60, min_corr=0.3, stride=1, progress_callback=None)[source]¶
Build daily graph snapshots from historical market data for GNN training.
Fetches multi-year daily prices once, then rolls through each trading day constructing:
Node features: [N, 5] per bank (volatility, return, log-price, beta_proxy, momentum)
Edge index + weight: from rolling correlation of returns
Spectral targets: lambda_2, spectral_gap, spectral_radius from the day’s graph
- Parameters:
lookback_years (int) – How many years of history to fetch (default 3 → ~750 trading days).
corr_window (int) – Rolling window for correlation-based adjacency (trading days).
min_corr (float) – Threshold for edge inclusion.
stride (int) – Step between consecutive snapshots (1 = every day, 5 = weekly).
progress_callback (callable(current, total)) – For UI progress updates.
- Return type:
list of snapshot dicts compatible with GNNPredictor.
Prediction¶
Prediction helpers for the Evolution page.
Generates GNN training data from real daily market snapshots (yfinance), trains the GNNPredictor (temporal GCN+LSTM), and builds SCG-vs-Basel comparison data.
- dashboard.prediction.generate_evolution_data(n_steps=500, source='market', corr_window=60, stride=1, progress_callback=None)[source]¶
Generate graph snapshots for GNN training.
- Parameters:
n_steps (int) – For ‘abm’ source: number of ABM steps. For ‘market’ source: ignored (uses all available daily data).
source (str) – ‘market’ = real daily data from yfinance (default, recommended); ‘abm’ = stochastic ABM simulation.
corr_window (int) – Rolling correlation window for market data (trading days).
stride (int) – Day stride for market snapshots (1=daily, 5=weekly).
progress_callback (callable(current, total)) – For UI progress updates.
- Return type:
list of snapshot dicts compatible with GNNPredictor
- dashboard.prediction.train_predictor(snapshots, seq_len=10, hidden_dim=64, num_gcn_layers=3, num_lstm_layers=2, epochs=200, lr=0.003, dropout=0.1, progress_callback=None)[source]¶
Train a GNNPredictor on graph snapshots.
Returns (predictor, train_metrics, test_metrics).
GNN Export¶
GNN dataset exporter for the SCR Financial Networks dashboard.
Builds a graph dataset from the current simulation state + LLM-fetched bank features and writes it to disk in multiple formats:
nodes.csv — node feature matrix (one row per bank)
edges.csv — directed edge list with weights
graph_data.json — full graph as JSON (PyG-loadable via custom loader)
pyg_data.pt — torch_geometric.data.Data object (if PyG installed)
metadata.json — feature names, bank labels, dataset provenance
Usage:
from dashboard.gnn_export import build_and_export
info = build_and_export(gnn_features, output_dir="data/gnn_datasets")
- dashboard.gnn_export.build_graph_tensors(bank_ids, node_data, edges, feature_cols=None)[source]¶
Convert node/edge dicts into numpy arrays ready for GNN consumption.
- Returns:
X (float32 [N, F] — node feature matrix (NaN-imputed with column mean))
edge_index (int64 [2, E] — source/target index pairs)
edge_attr (float32 [E, 1] — edge weight (normalised 0-1))
y (int64 [N, 2] — binary labels [solvent, liquid])
feat_names (list[str] — feature column names (matches X columns))
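The tensor construction documented above (NaN imputation with column means, a [2, E] edge index, 0-1 normalised edge weights) can be sketched in a self-contained way. This mirrors the documented return shapes but is not the dashboard implementation, and the label column y is omitted for brevity:

```python
import numpy as np

def graph_tensors(bank_ids, node_data, edges, feature_cols):
    """Mirror of the documented return shapes; not the dashboard implementation."""
    # Node feature matrix, NaN-imputed with column means
    X = np.array([[node_data[b].get(f, np.nan) for f in feature_cols]
                  for b in bank_ids], dtype=np.float32)
    col_mean = np.nanmean(X, axis=0)
    X = np.where(np.isnan(X), col_mean, X)

    # [2, E] source/target index pairs and 0-1 normalised weights
    idx = {b: i for i, b in enumerate(bank_ids)}
    edge_index = np.array([[idx[s] for s, t, w in edges],
                           [idx[t] for s, t, w in edges]], dtype=np.int64)
    w = np.array([w for s, t, w in edges], dtype=np.float32)
    w = (w - w.min()) / (w.max() - w.min() + 1e-12)
    return X, edge_index, w.reshape(-1, 1)

banks = ["A", "B", "C"]
feats = {"A": {"vol": 0.2, "ret": 0.01}, "B": {"vol": 0.3}, "C": {"vol": 0.1, "ret": 0.02}}
edge_list = [("A", "B", 0.8), ("B", "C", 0.4)]
X, edge_index, edge_attr = graph_tensors(banks, feats, edge_list, ["vol", "ret"])
```

Bank B's missing `ret` is imputed with the column mean of the other banks' values.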
- dashboard.gnn_export.build_and_export(gnn_features, sim_graph, output_dir='data/gnn_datasets', tag=None)[source]¶
Build a GNN dataset from LLM-fetched features + simulation graph and save it to output_dir.
- Parameters:
gnn_features ({bank_id: {feature: value}}) – Output of fetch_bank_features_for_gnn().
sim_graph ({nodes: [...], edges: [...]}) – Output of simulation_state.get_network_graph_data().
output_dir (str) – Directory to write dataset files into.
tag (str, optional) – Short label for the export (used in filenames). Defaults to a timestamp.
- Returns:
dict with keys output_dir, files, n_nodes, n_edges, n_features, timestamp
- Return type:
dict
Simulation State¶
Global simulation state for the dashboard.
Builds the BankingSystemSimulation from the live data pipeline (EBACollector → ECBCollector → DataPreprocessor) rather than any hard-coded demo values. Thread safety is achieved via a module-level lock.
- dashboard.simulation_state.get_data_source()[source]¶
Return ‘API’ or ‘EBA’ depending on how data was loaded.
- Return type:
str
- dashboard.simulation_state.reload_data(start_date=None, end_date=None, bank_list=None, snapshot_date=None)[source]¶
Re-fetch data from the pipeline with updated parameters.
- dashboard.simulation_state.load_from_data(bank_data, network_data, system_indicators)[source]¶
Replace simulation state with externally-fetched data (e.g. from data_api).
- dashboard.simulation_state.reset_simulation()[source]¶
Reset the ABM to the loaded data snapshot without re-fetching.
- Return type:
None
- dashboard.simulation_state.apply_shock_and_record(shock_params)[source]¶
Apply shock, record state immediately, return full history.
REST API¶
FastAPI backend for the SCR Financial Networks dashboard.
Endpoints¶
GET /health — liveness probe
GET /simulation/state — current bank + system state
POST /simulation/run — run N steps
POST /simulation/shock — apply a named or custom shock
POST /simulation/reset — reset to initial state
GET /spectral — full spectral analysis
POST /analysis/llm — LLM narrative analysis
Run with:
uvicorn dashboard.api:app --reload --port 8000
- class dashboard.api.RunRequest(*, steps=10, shock_scenario=None)[source]¶
Bases: BaseModel
- model_config: ClassVar[ConfigDict] = {}¶
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- class dashboard.api.ShockRequest(*, scenario=None, custom_params=None)[source]¶
Bases: BaseModel
- model_config: ClassVar[ConfigDict] = {}¶
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- class dashboard.api.ReloadRequest(*, start_date='2020-01-01', end_date='2024-12-31', bank_list=None, snapshot_date=None)[source]¶
Bases: BaseModel
- model_config: ClassVar[ConfigDict] = {}¶
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- class dashboard.api.LLMRequest(*, model=None, api_key=None)[source]¶
Bases: BaseModel
- model_config: ClassVar[ConfigDict] = {}¶
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- dashboard.api.get_state()[source]¶
Return current bank states, system metrics, and network graph data.
- dashboard.api.run_simulation(req)[source]¶
Run the simulation for the requested number of steps (RunRequest.steps).
- Parameters:
req (RunRequest)
- dashboard.api.apply_shock(req)[source]¶
Apply a named or custom shock to the simulation.
- Parameters:
req (ShockRequest)
- dashboard.api.reload_data(req)[source]¶
Re-fetch data from the pipeline with updated parameters.
- Parameters:
req (ReloadRequest)