Dashboard

Data Loader

Data loading bridge between the scr_financial data pipeline and the dashboard.

Uses DataPreprocessor / EBACollector / ECBCollector / MarketDataCollector to build the dicts that BankingSystemSimulation expects.

dashboard.data_loader.load_simulation_inputs(start_date='2020-01-01', end_date='2024-12-31', bank_list=None, snapshot_date=None)[source]

Load bank data, network data, and system indicators from the data pipeline.

Parameters:
  • start_date (str) – Start of the data-collection range (‘YYYY-MM-DD’).

  • end_date (str) – End of the data-collection range (‘YYYY-MM-DD’).

  • bank_list (list of str, optional) – Bank IDs to include. Defaults to ALL_BANKS.

  • snapshot_date (str, optional) – Date to take the snapshot at. Defaults to end_date.

Returns:

  • bank_data (dict[bank_id -> state dict])

  • network_data (dict[bank_id -> {target_id: weight}])

  • system_indicators (dict)

Return type:

Tuple[Dict[str, Any], Dict[str, Dict[str, float]], Dict[str, Any]]
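The shapes of the three returned structures can be illustrated with a minimal hand-built example. Bank IDs and field names below are illustrative stand-ins, not the pipeline's actual schema:

```python
# Illustrative shapes of the three structures returned by
# load_simulation_inputs.  The real pipeline derives the contents
# from EBA/ECB data; field names here are examples only.
bank_data = {
    "BANK_A": {"total_assets": 1.2e12, "equity": 6.0e10, "liquid_assets": 1.5e11},
    "BANK_B": {"total_assets": 8.0e11, "equity": 4.5e10, "liquid_assets": 9.0e10},
}
network_data = {
    "BANK_A": {"BANK_B": 0.42},   # edge weight A -> B
    "BANK_B": {"BANK_A": 0.42},   # edge weight B -> A
}
system_indicators = {"ciss": 0.18, "sovereign_stress": 1.6}

# The loader returns them as a 3-tuple:
inputs = (bank_data, network_data, system_indicators)
```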

Data API

Fast, API-based financial data fetcher for the SCR dashboard.

Data sources (no LLM required for any of these):

  • yfinance — bank stock prices, return correlations (→ adjacency matrix A), market cap, balance sheet financials (total assets, equity)

  • ECB SDW — sovereign bond yields (IT, DE, FR, ES, NL, SE), EUR/USD rate, ECB deposit facility rate

  • FRED — TED spread, VIX (systemic stress proxies)

Correlation-based edge weights

As per the SCG proposal (§2.1), edges are defined by the Pearson correlation of bank stock daily returns over a rolling window. This is real, daily data that is:

  • Updated automatically every trading day

  • Directly computable without regulatory disclosures

  • Already used in the SCG literature (Mantegna 1999, Tumminello 2007)

All network fetches are parallelised via ThreadPoolExecutor for speed.
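The edge-weight construction described above can be sketched with synthetic prices (the real fetcher pulls daily closes from yfinance; the bank names and the 252-day window here are illustrative):

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
# Synthetic daily close prices for 4 banks over a 252-day window
prices = pd.DataFrame(
    rng.lognormal(mean=0.0005, sigma=0.02, size=(252, 4)).cumprod(axis=0),
    columns=["BANK_A", "BANK_B", "BANK_C", "BANK_D"],
)

returns = prices.pct_change().dropna()      # daily returns
corr = returns.corr(method="pearson")       # Pearson correlation matrix
adj = corr.where(corr.abs() >= 0.3, 0.0)    # threshold filtering (min_corr=0.3)
np.fill_diagonal(adj.values, 0.0)           # no self-loops
```

The thresholding step is what §2.3 calls threshold filtering; the PMFG option replaces it with a planarity-constrained filter.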

dashboard.data_api.fetch_correlation_adjacency(bank_ids=None, window_days=252, min_corr=0.3, pmfg=False)[source]

Build the correlation-based adjacency matrix from daily bank stock returns.

As per §2.1 of the SCG proposal: edges are Pearson correlations of returns. Weak edges (< min_corr) are removed (threshold filtering, §2.3).

Parameters:
  • bank_ids (list of str, optional) – Bank IDs to include; defaults to all 10.

  • window_days (int) – Rolling return window in trading days.

  • min_corr (float) – Threshold below which edges are zeroed out.

  • pmfg (bool) – If True, apply the Planar Maximally Filtered Graph (slower but cleaner).

Returns:

dict {source_id: {target_id: weight}} — upper-triangular correlation weights

Return type:

Dict[str, Dict[str, float]]

dashboard.data_api.fetch_bank_market_features(bank_ids=None)[source]

Fetch per-bank market and fundamental features from yfinance in parallel.

Returns:

  • dict {bank_id: {feature: value}}

  • Features available – market_cap, total_assets, common_equity, roe, price_to_book, beta, shares_outstanding, latest_price, 1y_return, volatility_30d

Parameters:

bank_ids (List[str] | None)

Return type:

Dict[str, Dict[str, Any]]
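Two of the listed features can be derived from a daily price series alone. A sketch with synthetic data (the actual fetcher computes them from yfinance history; the annualisation convention below is an assumption):

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
# Synthetic ~1 trading year of daily close prices
prices = pd.Series(rng.lognormal(0.0004, 0.015, size=252).cumprod())

# 1y_return: simple price return over the full window
one_year_return = prices.iloc[-1] / prices.iloc[0] - 1.0

# volatility_30d: realised vol of the last 30 daily returns,
# annualised with sqrt(252) (assumed convention)
daily_returns = prices.pct_change().dropna()
volatility_30d = daily_returns.tail(30).std() * np.sqrt(252)
```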

dashboard.data_api.fetch_sovereign_spreads()[source]

Fetch 10Y sovereign bond yields from ECB SDW and compute IT-DE spread as a systemic stress proxy.

Returns:

dict {country: latest_yield_pct, …, ‘IT_DE_spread’: float, ‘ES_DE_spread’: float}

Return type:

Dict[str, float]

dashboard.data_api.fetch_system_indicators()[source]

Fetch system-level stress indicators from free public APIs in parallel.

Returns a dict suitable for BankingSystemSimulation.system_indicators:

  • CISS — derived from IT-DE spread and bank volatility

  • funding_stress — from bank stock volatility index

  • sovereign_stress — IT-DE 10Y spread

  • eurusd — EUR/USD rate

Return type:

Dict[str, float]
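The exact composite formula is internal to the fetcher; a hedged sketch of one plausible construction, mapping the two raw inputs into a bounded stress score (the squashing constants and 50/50 weighting are illustrative, not the dashboard's calibration):

```python
import math

def ciss_proxy(it_de_spread_pct: float, bank_vol_ann: float) -> float:
    """Combine an IT-DE 10Y spread (percentage points) and annualised
    bank-stock volatility into a [0, 1] stress score.  Constants here
    are illustrative, not the dashboard's actual calibration."""
    spread_component = 1 - math.exp(-it_de_spread_pct / 2.0)  # ~0 calm, -> 1 stressed
    vol_component = 1 - math.exp(-bank_vol_ann / 0.4)
    return 0.5 * spread_component + 0.5 * vol_component

indicators = {
    "CISS": ciss_proxy(it_de_spread_pct=1.6, bank_vol_ann=0.25),
    "sovereign_stress": 1.6,   # IT-DE 10Y spread itself
    "eurusd": 1.08,            # illustrative rate
}
```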

dashboard.data_api.fetch_all(bank_ids=None, correlation_window=252, min_corr=0.3)[source]

Full parallel fetch: adjacency matrix + node features + system indicators.

Returns:

dict with keys:

  • adjacency — {src: {tgt: weight}}

  • bank_features — {bank_id: {feature: value}}

  • system — {indicator: value}

  • prices — pd.DataFrame (daily close prices)

  • timestamp — str (UTC ISO)

Return type:

Dict[str, Any]

Parameters:
  • bank_ids (List[str] | None)

  • correlation_window (int)

  • min_corr (float)

dashboard.data_api.build_simulation_inputs_from_api(bank_ids=None, correlation_window=252, min_corr=0.3)[source]

Build (bank_data, network_data, system_indicators) directly from market APIs.

Same output format as data_loader.load_simulation_inputs — can be used as a drop-in replacement when live data is preferred over the EBA pipeline.

Parameters:
  • bank_ids (List[str] | None)

  • correlation_window (int)

  • min_corr (float)

Return type:

Tuple[Dict[str, Any], Dict[str, Dict[str, float]], Dict[str, Any]]

dashboard.data_api.build_daily_graph_snapshots(bank_ids=None, lookback_years=3, corr_window=60, min_corr=0.3, stride=1, progress_callback=None)[source]

Build daily graph snapshots from historical market data for GNN training.

Fetches multi-year daily prices once, then rolls through each trading day constructing:

  • Node features: [N, 5] per bank (volatility, return, log-price, beta_proxy, momentum)

  • Edge index + weight: from rolling correlation of returns

  • Spectral targets: lambda_2, spectral_gap, spectral_radius from the day’s graph

Parameters:
  • lookback_years (int) – How many years of history to fetch (default 3 → ~750 trading days).

  • corr_window (int) – Rolling window for correlation-based adjacency (trading days).

  • min_corr (float) – Threshold for edge inclusion.

  • stride (int) – Step between consecutive snapshots (1 = every day, 5 = weekly).

  • progress_callback (callable(current, total)) – For UI progress updates.

  • bank_ids (List[str] | None)

Return type:

list of snapshot dicts compatible with GNNPredictor.
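The spectral targets come from each day's graph. lambda_2 is the second-smallest Laplacian eigenvalue (algebraic connectivity); whether spectral_gap and spectral_radius are taken from the Laplacian or the adjacency is not stated here, so the adjacency-based versions below are an assumption. A self-contained sketch for one snapshot with an illustrative 4-bank adjacency:

```python
import numpy as np

# Symmetric weighted adjacency for one 4-bank snapshot (illustrative weights)
A = np.array([
    [0.0, 0.6, 0.4, 0.0],
    [0.6, 0.0, 0.5, 0.3],
    [0.4, 0.5, 0.0, 0.7],
    [0.0, 0.3, 0.7, 0.0],
])

L = np.diag(A.sum(axis=1)) - A              # graph Laplacian
lap_eigs = np.sort(np.linalg.eigvalsh(L))   # ascending eigenvalues
adj_eigs = np.sort(np.linalg.eigvalsh(A))

lambda_2 = lap_eigs[1]                      # algebraic connectivity
spectral_gap = adj_eigs[-1] - adj_eigs[-2]  # gap between top adjacency eigenvalues
spectral_radius = np.abs(adj_eigs).max()    # largest |eigenvalue| of A
```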

Prediction

Prediction helpers for the Evolution page.

Generates GNN training data from real daily market snapshots (yfinance), trains the GNNPredictor (temporal GCN+LSTM), and builds SCG-vs-Basel comparison data.

dashboard.prediction.generate_evolution_data(n_steps=500, source='market', corr_window=60, stride=1, progress_callback=None)[source]

Generate graph snapshots for GNN training.

Parameters:
  • n_steps (int) – For ‘abm’ source: number of ABM steps. For ‘market’ source: ignored (uses all available daily data).

  • source (str) – ‘market’ — real daily data from yfinance (default, recommended) ‘abm’ — stochastic ABM simulation

  • corr_window (int) – Rolling correlation window for market data (trading days).

  • stride (int) – Day stride for market snapshots (1=daily, 5=weekly).

  • progress_callback (callable(current, total)) – For UI progress updates.

Return type:

List[Dict[str, Any]]

dashboard.prediction.train_predictor(snapshots, seq_len=10, hidden_dim=64, num_gcn_layers=3, num_lstm_layers=2, epochs=200, lr=0.003, dropout=0.1, progress_callback=None)[source]

Train a GNNPredictor on graph snapshots.

Returns (predictor, train_metrics, test_metrics).

Parameters:
  • snapshots (list of snapshot dicts)

  • seq_len (int)

  • hidden_dim (int)

  • num_gcn_layers (int)

  • num_lstm_layers (int)

  • epochs (int)

  • lr (float)

  • dropout (float)

  • progress_callback (callable(current, total))

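The sequence construction implied by seq_len can be sketched independently of the model. This is an assumed windowing with a chronological train/test split; GNNPredictor's internal batching may differ:

```python
# Window seq_len consecutive snapshots into (sequence, next-step target)
# pairs for a temporal GCN+LSTM.  Snapshot dicts here are stand-ins.
seq_len = 10
snapshots = [{"lambda_2": 0.1 * i} for i in range(50)]

pairs = [
    (snapshots[i : i + seq_len], snapshots[i + seq_len])
    for i in range(len(snapshots) - seq_len)
]

split = int(0.8 * len(pairs))   # chronological 80/20 split (no shuffling)
train_pairs, test_pairs = pairs[:split], pairs[split:]
```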
dashboard.prediction.compute_scg_reconstruction_accuracy()[source]

Run the full SCG pipeline on the current adjacency and return reconstruction accuracy.

Return type:

Dict[str, Any]

dashboard.prediction.build_scg_vs_basel_comparison(feature_history)[source]

Build SCG risk score vs Basel + CoVaR + MES per step.

Parameters:

feature_history (List[Dict[str, Any]])

Return type:

Dict[str, List]

GNN Export

GNN dataset exporter for the SCR Financial Networks dashboard.

Builds a graph dataset from the current simulation state + LLM-fetched bank features and writes it to disk in multiple formats:

  • nodes.csv — node feature matrix (one row per bank)

  • edges.csv — directed edge list with weights

  • graph_data.json — full graph as JSON (PyG-loadable via custom loader)

  • pyg_data.pt — torch_geometric.data.Data object (if PyG installed)

  • metadata.json — feature names, bank labels, dataset provenance

Usage:

from dashboard.gnn_export import build_and_export
info = build_and_export(gnn_features, output_dir="data/gnn_datasets")

dashboard.gnn_export.build_graph_tensors(bank_ids, node_data, edges, feature_cols=None)[source]

Convert node/edge dicts into numpy arrays ready for GNN consumption.

Returns:

  • X (float32 [N, F] — node feature matrix (NaN-imputed with column mean))

  • edge_index (int64 [2, E] — source/target index pairs)

  • edge_attr (float32 [E, 1] — edge weight (normalised 0-1))

  • y (int64 [N, 2] — binary labels [solvent, liquid])

  • feat_names (list[str] — feature column names (matches X columns))

Parameters:
  • bank_ids (list of bank IDs)

  • node_data (dict of per-bank feature dicts)

  • edges (edge list with weights)

  • feature_cols (list of str, optional)

Return type:

Tuple[ndarray, ndarray, ndarray, ndarray, List[str]]
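The conversion can be sketched in plain numpy. This is a minimal re-implementation under assumptions (the edge-tuple format and feature names are illustrative; the real function also extracts the solvent/liquid labels and normalises weights to 0-1):

```python
import numpy as np

node_data = {  # illustrative per-bank features, with one missing value
    "BANK_A": {"total_assets": 1.2e12, "roe": 0.08},
    "BANK_B": {"total_assets": 8.0e11, "roe": np.nan},
}
edges = [("BANK_A", "BANK_B", 0.42)]  # assumed (source, target, weight) tuples

bank_ids = list(node_data)
feat_names = ["total_assets", "roe"]
idx = {b: i for i, b in enumerate(bank_ids)}

# X: [N, F] node feature matrix, NaN-imputed with the column mean
X = np.array([[node_data[b][f] for f in feat_names] for b in bank_ids],
             dtype=np.float32)
col_mean = np.nanmean(X, axis=0)
X = np.where(np.isnan(X), col_mean, X).astype(np.float32)

# edge_index: [2, E] source/target index pairs; edge_attr: [E, 1] weights
edge_index = np.array([[idx[s] for s, t, w in edges],
                       [idx[t] for s, t, w in edges]], dtype=np.int64)
edge_attr = np.array([[w] for s, t, w in edges], dtype=np.float32)
```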

dashboard.gnn_export.build_and_export(gnn_features, sim_graph, output_dir='data/gnn_datasets', tag=None)[source]

Build a GNN dataset from LLM-fetched features + simulation graph and save it to output_dir.

Parameters:
  • gnn_features ({bank_id: {feature: value}}) – Output of fetch_bank_features_for_gnn().

  • sim_graph ({nodes: [...], edges: [...]}) – Output of simulation_state.get_network_graph_data().

  • output_dir (str) – Directory to write dataset files into.

  • tag (str, optional) – Short label for the export (used in filenames). Defaults to a timestamp.

Returns:

dict with keys output_dir, files, n_nodes, n_edges, n_features, timestamp

Return type:

Dict[str, Any]

Simulation State

Global simulation state for the dashboard.

Builds the BankingSystemSimulation from the live data pipeline (EBACollector → ECBCollector → DataPreprocessor) rather than any hard-coded demo values. Thread safety is achieved via a module-level lock.

dashboard.simulation_state.get_data_source()[source]

Return ‘API’ or ‘EBA’ depending on how data was loaded.

Return type:

str

dashboard.simulation_state.get_config()[source]
Return type:

Dict[str, Any]

dashboard.simulation_state.reload_data(start_date=None, end_date=None, bank_list=None, snapshot_date=None)[source]

Re-fetch data from the pipeline with updated parameters.

Parameters:
  • start_date (str | None)

  • end_date (str | None)

  • bank_list (List[str] | None)

  • snapshot_date (str | None)

Return type:

None

dashboard.simulation_state.load_from_data(bank_data, network_data, system_indicators)[source]

Replace simulation state with externally-fetched data (e.g. from data_api).

Parameters:
  • bank_data (Dict[str, Any])

  • network_data (Dict[str, Dict[str, float]])

  • system_indicators (Dict[str, Any])

Return type:

None

dashboard.simulation_state.reset_simulation()[source]

Reset the ABM to the loaded data snapshot without re-fetching.

Return type:

None

dashboard.simulation_state.get_simulation()[source]
Return type:

BankingSystemSimulation

dashboard.simulation_state.run_steps(steps, shocks=None)[source]
Parameters:
  • steps (int)

  • shocks (Dict[str, Any] | None)

Return type:

List[Dict]

dashboard.simulation_state.apply_shock(shock_params)[source]
Parameters:

shock_params (Dict[str, Any])

Return type:

None

dashboard.simulation_state.apply_shock_and_record(shock_params)[source]

Apply shock, record state immediately, return full history.

Parameters:

shock_params (Dict[str, Any])

Return type:

List[Dict]

dashboard.simulation_state.apply_llm_bank_data(bank_data)[source]

Overwrite bank states with data fetched by the LLM.

Parameters:

bank_data (Dict[str, Any])

Return type:

None

dashboard.simulation_state.get_spectral_data()[source]
Return type:

Dict[str, Any]

dashboard.simulation_state.get_network_graph_data()[source]
Return type:

Dict[str, Any]

dashboard.simulation_state.get_coarse_grained_data()[source]

Run spectral coarse-graining on the current adjacency and return results.

Return type:

Dict[str, Any]

REST API

FastAPI backend for the SCR Financial Networks dashboard.

Endpoints

  • GET /health — liveness probe

  • GET /simulation/state — current bank + system state

  • POST /simulation/run — run N steps

  • POST /simulation/shock — apply a named or custom shock

  • POST /simulation/reset — reset to initial state

  • GET /spectral — full spectral analysis

  • POST /analysis/llm — LLM narrative analysis

Run with:

uvicorn dashboard.api:app --reload --port 8000

class dashboard.api.RunRequest(*, steps=10, shock_scenario=None)[source]

Bases: BaseModel

Parameters:
  • steps (int)

  • shock_scenario (str | None)

steps: int
shock_scenario: str | None
model_config: ClassVar[ConfigDict] = {}

Configuration for the model; a dictionary conforming to pydantic's ConfigDict.

class dashboard.api.ShockRequest(*, scenario=None, custom_params=None)[source]

Bases: BaseModel

Parameters:
  • scenario (str | None)

  • custom_params (Dict[str, Any] | None)

scenario: str | None
custom_params: Dict[str, Any] | None
model_config: ClassVar[ConfigDict] = {}

Configuration for the model; a dictionary conforming to pydantic's ConfigDict.

class dashboard.api.ReloadRequest(*, start_date='2020-01-01', end_date='2024-12-31', bank_list=None, snapshot_date=None)[source]

Bases: BaseModel

Parameters:
  • start_date (str)

  • end_date (str)

  • bank_list (List[str] | None)

  • snapshot_date (str | None)

start_date: str
end_date: str
bank_list: List[str] | None
snapshot_date: str | None
model_config: ClassVar[ConfigDict] = {}

Configuration for the model; a dictionary conforming to pydantic's ConfigDict.

class dashboard.api.LLMRequest(*, model=None, api_key=None)[source]

Bases: BaseModel

Parameters:
  • model (str | None)

  • api_key (str | None)

model: str | None
api_key: str | None
model_config: ClassVar[ConfigDict] = {}

Configuration for the model; a dictionary conforming to pydantic's ConfigDict.

dashboard.api.health()[source]
Return type:

Dict[str, str]

dashboard.api.get_state()[source]

Return current bank states, system metrics, and network graph data.

Return type:

Dict[str, Any]

dashboard.api.run_simulation(req)[source]

Run the simulation for the requested number of steps.

Parameters:

req (RunRequest)

Return type:

Dict[str, Any]

dashboard.api.apply_shock(req)[source]

Apply a named or custom shock to the simulation.

Parameters:

req (ShockRequest)

Return type:

Dict[str, str]

dashboard.api.reset()[source]
Return type:

Dict[str, str]

dashboard.api.reload_data(req)[source]

Re-fetch data from the pipeline with updated parameters.

Parameters:

req (ReloadRequest)

Return type:

Dict[str, Any]

dashboard.api.get_config()[source]
Return type:

Dict[str, Any]

dashboard.api.get_spectral()[source]

Return full spectral analysis of the current network.

Return type:

Dict[str, Any]

dashboard.api.list_scenarios()[source]

List available shock scenarios.

Return type:

List[Dict[str, str]]

dashboard.api.llm_analysis(req)[source]

Generate a narrative analysis of the current state using Cerebras LLM.

Parameters:

req (LLMRequest)

Return type:

Dict[str, str]