[email protected] commited on
Commit d45fa12 Β· 1 parent: 51b84b9

πŸ—‘οΈ Remove PostgreSQL database dependencies and related files


- Delete docker-compose.yaml (PostgreSQL and pgAdmin services)
- Remove src/preprocess/db.py (database connection file)
- Clean up setup.py: remove psycopg2-binary and SQLAlchemy dependencies
- Remove docker/ directory and initialization files
- Update README.md to reflect simplified structure
- Add Excel processing dependencies (openpyxl, xlrd) to setup.py
- Project now uses CSV files directly without database layer
- Simplified deployment without Docker dependencies
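Since the commit swaps the SQLAlchemy/PostgreSQL layer for direct CSV reads, here is a minimal stdlib sketch of what that direct-read flow looks like. The column names and sample rows below are illustrative, not the project's actual schema:

```python
import csv
import io

# Minimal sketch: where the removed db.py exposed a SQLAlchemy engine,
# the simplified project reads CSV rows straight into dicts.
# The columns ("order_id", "product", "quantity") are hypothetical.
def load_orders(csv_text):
    """Parse order rows from CSV text, converting quantities to int."""
    reader = csv.DictReader(io.StringIO(csv_text))
    return [{**row, "quantity": int(row["quantity"])} for row in reader]

sample = "order_id,product,quantity\n1,PREPACK_A,200\n2,SUBKIT_X,100\n"
orders = load_orders(sample)
print(len(orders), orders[0]["product"])  # β†’ 2 PREPACK_A
```

In the real project the text would come from `open(...)` on one of the converted CSV files rather than an inline string.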

.env.example DELETED
File without changes
README.md CHANGED
@@ -32,7 +32,6 @@ SD_roster_real/
  β”œβ”€β”€ main.py # Main entry point
  β”œβ”€β”€ README.md
  β”œβ”€β”€ requirements.txt
- β”œβ”€β”€ docker-compose.yaml
  β”œβ”€β”€ setup.py
  β”‚
  β”œβ”€β”€ src/ # Core business logic
STREAMLIT_README.md DELETED
@@ -1,161 +0,0 @@
- # SD Roster Optimization Tool - Multi-Page Streamlit Interface
-
- A comprehensive multi-page Streamlit web application for supply chain roster optimization using OR-Tools.
-
- ## πŸ—οΈ Application Structure
-
- ### 🏠 Home Page
- - **Welcome Dashboard**: Overview and navigation hub
- - **Global Settings**: Shared data path and date configuration
- - **System Status**: Component availability and health checks
- - **Quick Navigation**: Direct links to main functionalities
-
- ### πŸ“Š Dataset Metadata Page
- Comprehensive data analysis across five detailed tabs:
-
- #### πŸ“‹ Data Overview Tab
- - **Key Metrics**: Orders, quantities, products, employees, production lines
- - **Data Quality Analysis**: Completeness scores and missing data indicators
- - **Data Freshness**: Latest data timestamps and age indicators
-
- #### πŸ“¦ Demand Analysis Tab
- - **Demand Metrics**: Total, average, max, min order sizes
- - **Top Products**: Ranking by demand volume with visualizations
- - **Daily Patterns**: Trend analysis and demand variability
- - **Distribution Analysis**: Order quantity and frequency distributions
-
- #### πŸ‘₯ Workforce Analysis Tab
- - **Employee Metrics**: Total staff, types, distribution
- - **Cost Structure**: Hourly rates by employee type and shift
- - **Productivity Analysis**: Performance metrics by employee type
-
- #### 🏭 Production Capacity Tab
- - **Line Metrics**: Total lines, types, maximum capacities
- - **Capacity Distribution**: Line allocation and utilization potential
- - **Theoretical Analysis**: Maximum capacity calculations by shift
-
- #### πŸ’° Cost Analysis Tab
- - **Cost Structure**: Min/max/average hourly rates and ranges
- - **Budget Planning**: Minimum and maximum cost scenarios
- - **Projections**: Weekly and monthly cost estimates
-
- ### 🎯 Optimization Page
- Advanced optimization interface with comprehensive results:
-
- #### πŸ“Š Summary Tab
- - Total optimization cost and key metrics
- - Cost efficiency analysis (cost per day, cost per unit)
- - Optimization parameters used
-
- #### πŸ“ˆ Production Tab
- - Production vs. demand comparison by product
- - Fulfillment rate analysis with interactive charts
- - Production schedule visualization
-
- #### πŸ‘· Labor Tab
- - Labor allocation by employee type and shift
- - Required headcount analysis
- - Daily and average staffing requirements
-
- #### πŸ’° Costs Tab
- - Detailed cost breakdown by employee type and shift
- - Cost distribution visualizations
- - Priority mode analysis (when applicable)
-
- ## Quick Start
-
- ### 1. Install Dependencies
- ```bash
- pip install -r requirements.txt
- ```
-
- ### 2. Run the Application
- ```bash
- # Option 1: Using the runner script
- python run_streamlit.py
-
- # Option 2: Direct streamlit command
- streamlit run Home.py
- ```
-
- ### 3. Access the Application
- Open your browser to: `http://localhost:8501`
-
- ## Usage Guide
-
- ### Navigation Flow
- 1. **Start at Home**: Configure global settings and navigate to specific functions
- 2. **Explore Metadata**: Analyze your data across the comprehensive metadata tabs
- 3. **Run Optimization**: Configure parameters and execute optimization on the dedicated page
- 4. **Analyze Results**: Review detailed results across multiple result tabs
-
- ### Page-by-Page Guide
- 1. **Home Page**: Set data paths, select date ranges, check system status
- 2. **Dataset Metadata**: Deep dive into demand, workforce, capacity, and cost analysis
- 3. **Optimization**: Configure optimization parameters, run optimization, analyze results
-
- ## Technical Details
-
- ### Optimization Engine
- - Built on Google OR-Tools for mixed-integer programming
- - Supports multiple constraint modes for realistic business scenarios
- - Handles complex multi-product, multi-shift, multi-line scheduling
-
- ### Data Sources
- The application automatically loads data from:
- - `COOIS_Released_Prod_Orders.csv` - Production orders and demand
- - Employee data files - Staff availability and costs
- - Production line configuration - Line capacities and capabilities
-
- ### Configuration
- Key optimization parameters can be adjusted in `src/config/optimization_config.py`:
- - Employee types and costs
- - Shift definitions and durations
- - Production line capacities
- - Constraint modes and business rules
-
- ## Business Scenarios
-
- ### Priority Mode (Recommended)
- - Uses UNICEF Fixed term staff first
- - Engages Humanizer staff only when fixed staff at capacity
- - Reflects realistic business operations
-
- ### Mandatory Mode
- - Forces all fixed staff to work full hours
- - More expensive but ensures full utilization
- - Useful for guaranteed staffing scenarios
-
- ### Demand-Driven Mode
- - Purely cost-optimized scheduling
- - No mandatory fixed hours
- - Most cost-efficient but may underutilize staff
-
- ## Troubleshooting
-
- ### Common Issues
- 1. **No Date Ranges Available**: Ensure your data files are in the correct location
- 2. **Optimization Fails**: Check that demand data exists for the selected date range
- 3. **Import Errors**: Verify all dependencies are installed
-
- ### Performance Tips
- - Smaller date ranges optimize faster
- - Reducing product count can improve solve time
- - Priority mode typically solves faster than mandatory mode
-
- ## File Structure
- ```
- Home.py                          # Main home page (entry point)
- pages/
- β”œβ”€β”€ 1_πŸ“Š_Dataset_Metadata.py     # Comprehensive data analysis page
- └── 2_🎯_Optimization.py         # Optimization interface and results
- run_streamlit.py                 # Convenient runner script
- src/
- β”œβ”€β”€ models/optimizer_real.py     # Core optimization engine
- β”œβ”€β”€ config/optimization_config.py # Configuration parameters
- └── etl/                         # Data extraction and transformation
- streamlit_app_old.py             # Backup of original single-page app
- ```
-
- ## Support
- For technical issues or feature requests, refer to the main project documentation or contact the development team.
data/real_data_excel/converted_csv/Kit_Composition_and_relation_cleaned_with_line_type_TEST.csv ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:8b371c129393480a757a7839ba092dfe076037954aeb3f3c9bc962ae83ef8b9f
+ size 23800
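The added CSV is stored as a Git LFS pointer, not the data itself: the three lines in the hunk above are the entire committed file, and `git lfs pull` fetches the 23,800-byte CSV they describe. A minimal sketch of parsing that pointer format (stdlib only):

```python
# Minimal sketch: a Git LFS pointer file is a short list of
# space-separated key/value lines ("version", "oid", "size"),
# exactly as shown in the hunk above.
def parse_lfs_pointer(text):
    """Split an LFS pointer into a dict of its fields."""
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")
        fields[key] = value
    return fields

pointer = (
    "version https://git-lfs.github.com/spec/v1\n"
    "oid sha256:8b371c129393480a757a7839ba092dfe076037954aeb3f3c9bc962ae83ef8b9f\n"
    "size 23800\n"
)
info = parse_lfs_pointer(pointer)
print(info["size"])  # β†’ 23800 (byte size of the real CSV, per the pointer)
```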
docker-compose.yaml DELETED
@@ -1,36 +0,0 @@
- version: '3.8'
- services:
-   db:
-     image: postgres:16-alpine
-     container_name: sd_postgres
-     restart: unless-stopped
-     ports:
-       - "${DB_PORT:-5432}:5432"
-     environment:
-       POSTGRES_USER: hjun
-       POSTGRES_PASSWORD: alsdfjwpoejfkd
-       POSTGRES_DB: sd_roster_real
-     volumes:
-       - db_data:/var/lib/postgresql/data
-       - ./docker/init:/docker-entrypoint-initdb.d:ro # initial schema/permission scripts
-     healthcheck:
-       test: ["CMD-SHELL", "pg_isready -U ${POSTGRES_USER} -d ${POSTGRES_DB}"]
-       interval: 5s
-       timeout: 3s
-       retries: 20
-
-   pgadmin:
-     image: dpage/pgadmin4
-     container_name: sd_pgadmin
-     restart: unless-stopped
-     environment:
-       PGADMIN_DEFAULT_EMAIL: [email protected]
-       PGADMIN_DEFAULT_PASSWORD: alsdfjwpoejfkd
-     ports:
-       - "${PGADMIN_PORT:-5050}:80"
-     depends_on:
-       db:
-         condition: service_healthy
-
- volumes:
-   db_data:
requirements-viz.txt DELETED
@@ -1,7 +0,0 @@
- # Optional visualization dependencies for enhanced hierarchy dashboard
- # Install with: pip install -r requirements-viz.txt
-
- networkx>=2.8.0 # For dependency network graphs
- plotly>=5.0.0 # For interactive charts (should already be installed)
- pandas>=1.3.0 # For data processing (should already be installed)
- numpy>=1.20.0 # For numerical operations (should already be installed)
requirements.txt CHANGED
@@ -6,3 +6,7 @@ scipy>=1.9.0
  ortools>=9.0.0
  openpyxl>=3.0.0
  xlrd>=2.0.0
+ # Optional visualization dependencies for enhanced hierarchy dashboard
+ # Install with: pip install -r requirements-viz.txt
+
+ networkx>=2.8.0 # For dependency network graphs
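The `networkx` dependency moved into requirements.txt powers the dependency network graphs, and kit dependencies form a directed acyclic graph. A minimal sketch of the ordering such a graph encodes, using stdlib `graphlib` in place of networkx so the example has no third-party imports; the kit names and edges are illustrative sample data, not the project's real hierarchy:

```python
from graphlib import TopologicalSorter

# Minimal sketch: kit dependencies as a DAG, mapping each kit to the
# set of kits it needs. Names here are illustrative sample data;
# graphlib stands in for the networkx DiGraph the dashboard builds.
dependencies = {
    "SUBKIT_X": {"PREPACK_A"},
    "MASTER_FINAL": {"SUBKIT_X", "PREPACK_A"},
}
order = list(TopologicalSorter(dependencies).static_order())
print(order)  # prepacks before subkits, subkits before masters
```

This is the same precedence the production schedule has to respect: a kit can only run after everything it depends on has been produced.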
setup.py CHANGED
@@ -9,22 +9,21 @@ setup(
  packages=find_packages(),
  install_requires=[
      "absl-py>=2.3.1",
-     "dotenv>=0.9.9",
      "immutabledict>=4.2.1",
      "numpy>=2.2.0",
      "ortools>=9.14.0",
      "pandas>=2.3.0",
      "plotly>=5.24.0",
      "protobuf>=3.20,<6",
-     "psycopg2-binary>=2.9.9",
      "python-dateutil>=2.9.0",
      "python-dotenv>=1.0.0",
      "pytz>=2025.2",
      "six>=1.17.0",
-     "SQLAlchemy>=2.0.36",
      "streamlit>=1.39.0",
      "typing_extensions>=4.14.0",
      "tzdata>=2025.2",
+     "openpyxl>=3.0.0",
+     "xlrd>=2.0.0",
  ],
  python_requires=">=3.10,<3.11",
  )
src/preprocess/db.py DELETED
@@ -1,17 +0,0 @@
- from sqlalchemy import create_engine
- from dotenv import load_dotenv
- import os
-
- load_dotenv()
- USER = os.getenv("POSTGRES_USER", "myuser")
- PWD = os.getenv("POSTGRES_PASSWORD", "mypass")
- DB = os.getenv("POSTGRES_DB", "mydb")
- PORT = os.getenv("DB_PORT", "5432")
- HOST = "localhost"
-
- engine = create_engine(
-     f"postgresql+psycopg2://{USER}:{PWD}@{HOST}:{PORT}/{DB}", future=True
- )
- if __name__ == "__main__":
-     with engine.begin() as conn:
-         print(conn.execute("select version();").scalar())
test_hierarchy_viz.py DELETED
@@ -1,167 +0,0 @@
- #!/usr/bin/env python3
- """
- Test script for hierarchy visualization
- Run this to see a demo of the hierarchy dashboard
- """
-
- import sys
- import os
- sys.path.append('src')
-
- # Sample data for testing
- def create_sample_results():
-     """Create sample optimization results for testing"""
-
-     # Sample schedule data showing hierarchy flow
-     sample_schedule = [
-         # Day 1: Prepacks first
-         {'day': 1, 'line_type_id': 6, 'line_idx': 1, 'shift': 1, 'product': 'PREPACK_A', 'run_hours': 4.0, 'units': 200},
-         {'day': 1, 'line_type_id': 7, 'line_idx': 1, 'shift': 1, 'product': 'PREPACK_B', 'run_hours': 3.5, 'units': 150},
-
-         # Day 2: Subkits using prepacks
-         {'day': 2, 'line_type_id': 6, 'line_idx': 1, 'shift': 1, 'product': 'SUBKIT_X', 'run_hours': 5.0, 'units': 100},
-         {'day': 2, 'line_type_id': 7, 'line_idx': 2, 'shift': 2, 'product': 'SUBKIT_Y', 'run_hours': 4.5, 'units': 80},
-
-         # Day 3: Master kits using subkits
-         {'day': 3, 'line_type_id': 6, 'line_idx': 2, 'shift': 1, 'product': 'MASTER_FINAL', 'run_hours': 6.0, 'units': 50},
-         {'day': 3, 'line_type_id': 7, 'line_idx': 1, 'shift': 3, 'product': 'MASTER_DELUXE', 'run_hours': 5.5, 'units': 40},
-     ]
-
-     # Sample hierarchy data
-     sample_kit_levels = {
-         'PREPACK_A': 0,      # Prepack
-         'PREPACK_B': 0,      # Prepack
-         'SUBKIT_X': 1,       # Subkit
-         'SUBKIT_Y': 1,       # Subkit
-         'MASTER_FINAL': 2,   # Master
-         'MASTER_DELUXE': 2,  # Master
-     }
-
-     # Sample dependencies
-     sample_dependencies = {
-         'SUBKIT_X': ['PREPACK_A'],
-         'SUBKIT_Y': ['PREPACK_B'],
-         'MASTER_FINAL': ['SUBKIT_X', 'PREPACK_A'],
-         'MASTER_DELUXE': ['SUBKIT_Y', 'PREPACK_B'],
-     }
-
-     # Sample production totals
-     weekly_production = {
-         'PREPACK_A': 200,
-         'PREPACK_B': 150,
-         'SUBKIT_X': 100,
-         'SUBKIT_Y': 80,
-         'MASTER_FINAL': 50,
-         'MASTER_DELUXE': 40,
-     }
-
-     # Sample workforce data
-     person_hours_by_day = [
-         {'day': 1, 'emp_type': 'UNICEF Fixed term', 'used_person_hours': 16, 'cap_person_hours': 64},
-         {'day': 1, 'emp_type': 'Humanizer', 'used_person_hours': 40, 'cap_person_hours': 80},
-         {'day': 2, 'emp_type': 'UNICEF Fixed term', 'used_person_hours': 20, 'cap_person_hours': 64},
-         {'day': 2, 'emp_type': 'Humanizer', 'used_person_hours': 45, 'cap_person_hours': 80},
-         {'day': 3, 'emp_type': 'UNICEF Fixed term', 'used_person_hours': 18, 'cap_person_hours': 64},
-         {'day': 3, 'emp_type': 'Humanizer', 'used_person_hours': 42, 'cap_person_hours': 80},
-     ]
-
-     return {
-         'objective': 12500.75,  # Total cost
-         'run_schedule': sample_schedule,
-         'weekly_production': weekly_production,
-         'person_hours_by_day': person_hours_by_day,
-         'kit_levels': sample_kit_levels,
-         'kit_dependencies': sample_dependencies
-     }
-
- def test_hierarchy_flow():
-     """Test the hierarchy flow visualization components"""
-     print("πŸ§ͺ Testing Hierarchy Visualization Components")
-     print("=" * 50)
-
-     # Create sample data
-     results = create_sample_results()
-     print(f"βœ… Created sample results with {len(results['run_schedule'])} production runs")
-
-     try:
-         # Test imports
-         from src.visualization.hierarchy_dashboard import (
-             prepare_hierarchy_flow_data,
-             prepare_hierarchy_analytics_data,
-             calculate_hierarchy_line_utilization,
-             get_hierarchy_level_summary
-         )
-         print("βœ… Successfully imported hierarchy dashboard functions")
-
-         # Test flow data preparation
-         flow_data = prepare_hierarchy_flow_data(results)
-         print(f"βœ… Prepared flow data: {len(flow_data)} flow records")
-
-         # Test analytics data
-         analytics = prepare_hierarchy_analytics_data(results)
-         print(f"βœ… Prepared analytics data: {analytics['dependency_violations']} violations detected")
-
-         # Test line utilization calculation
-         line_util = calculate_hierarchy_line_utilization(results)
-         print(f"βœ… Calculated line utilization for {len(line_util)} lines")
-
-         # Test hierarchy summary
-         summary = get_hierarchy_level_summary(flow_data)
-         print("βœ… Generated hierarchy level summary:")
-         for level, data in summary.items():
-             print(f"   - {level.title()}: {data['count']} products, {data['total_units']} units")
-
-         print("\nπŸŽ‰ All hierarchy visualization components working correctly!")
-         print("\nTo see the full visualization:")
-         print("1. Run your Streamlit app: streamlit run app.py")
-         print("2. Go to Settings page and run optimization")
-         print("3. Check the 'πŸ”„ Hierarchy Flow' tab in results")
-
-         return True
-
-     except Exception as e:
-         print(f"❌ Error testing hierarchy visualization: {e}")
-         import traceback
-         traceback.print_exc()
-         return False
-
- def display_sample_hierarchy_info():
-     """Display information about the sample hierarchy"""
-     print("\nπŸ“Š Sample Hierarchy Structure:")
-     print("=" * 30)
-
-     print("🟒 PREPACKS (Level 0):")
-     print("   - PREPACK_A: Basic components")
-     print("   - PREPACK_B: Basic components")
-
-     print("\n🟑 SUBKITS (Level 1):")
-     print("   - SUBKIT_X: Uses PREPACK_A")
-     print("   - SUBKIT_Y: Uses PREPACK_B")
-
-     print("\nπŸ”΄ MASTERS (Level 2):")
-     print("   - MASTER_FINAL: Uses SUBKIT_X + PREPACK_A")
-     print("   - MASTER_DELUXE: Uses SUBKIT_Y + PREPACK_B")
-
-     print("\nπŸ“… Production Flow:")
-     print("   Day 1: Produce prepacks first (dependencies)")
-     print("   Day 2: Produce subkits (using prepacks)")
-     print("   Day 3: Produce masters (using subkits)")
-
-     print("\nThis demonstrates the optimal hierarchy flow!")
-
- if __name__ == "__main__":
-     print("πŸ”„ Hierarchy Visualization Test")
-     print("=" * 40)
-
-     # Display sample info
-     display_sample_hierarchy_info()
-
-     # Test the components
-     success = test_hierarchy_flow()
-
-     if success:
-         print(f"\nβœ… Test completed successfully!")
-     else:
-         print(f"\n❌ Test failed - check error messages above")
-
-     print("\n" + "=" * 40)
test_kit_relationships.py DELETED
@@ -1,162 +0,0 @@
- #!/usr/bin/env python3
- """
- Test script for kit relationships visualization
- Tests the actual kit dependency relationships from kit_hierarchy.json
- """
-
- import sys
- import os
- sys.path.append('src')
-
- def test_kit_relationships():
-     """Test kit relationships visualization"""
-     print("πŸ”— Testing Kit Relationships Visualization")
-     print("=" * 50)
-
-     try:
-         # Test importing the kit relationships module
-         from src.visualization.kit_relationships import (
-             load_kit_hierarchy,
-             build_relationship_data,
-             get_production_timing,
-             find_dependency_violations
-         )
-         print("βœ… Successfully imported kit relationships module")
-
-         # Test loading hierarchy data
-         hierarchy_data = load_kit_hierarchy()
-         if hierarchy_data:
-             print(f"βœ… Loaded kit hierarchy: {len(hierarchy_data)} master kits")
-
-             # Show some example relationships
-             print("\nπŸ“‹ Sample Kit Relationships:")
-             count = 0
-             for kit_id, kit_info in hierarchy_data.items():
-                 if kit_info.get('dependencies') and count < 5:
-                     deps = kit_info['dependencies']
-                     kit_name = kit_info.get('name', kit_id)[:50] + "..." if len(kit_info.get('name', '')) > 50 else kit_info.get('name', kit_id)
-                     print(f"   β€’ {kit_id} ({kit_name})")
-                     print(f"     Depends on: {deps}")
-                     count += 1
-
-             # Test with sample production data
-             sample_produced_kits = set(list(hierarchy_data.keys())[:10])  # First 10 kits
-             print(f"\nπŸ§ͺ Testing with {len(sample_produced_kits)} sample produced kits")
-
-             relationships = build_relationship_data(hierarchy_data, sample_produced_kits)
-             print(f"βœ… Found {len(relationships)} dependency relationships")
-
-             if relationships:
-                 print("\nπŸ”— Sample Relationships:")
-                 for i, rel in enumerate(relationships[:5]):
-                     print(f"   {i+1}. {rel['source']} β†’ {rel['target']} ({rel['source_type']} β†’ {rel['target_type']})")
-
-             # Test production timing analysis
-             sample_timing = {kit: i % 5 + 1 for i, kit in enumerate(sample_produced_kits)}  # Random days 1-5
-             violations = find_dependency_violations(sample_timing, relationships)
-             print(f"βœ… Dependency analysis: {len(violations)} violations found")
-
-             print("\nπŸŽ‰ Kit relationships visualization components working!")
-             return True
-
-         else:
-             print("⚠️ No kit hierarchy data found - please check kit_hierarchy.json")
-             return False
-
-     except FileNotFoundError:
-         print("❌ Kit hierarchy file not found at data/hierarchy_exports/kit_hierarchy.json")
-         return False
-     except Exception as e:
-         print(f"❌ Error testing kit relationships: {e}")
-         import traceback
-         traceback.print_exc()
-         return False
-
- def display_hierarchy_structure():
-     """Display the structure of the hierarchy data"""
-     print("\nπŸ“Š Kit Hierarchy Structure Analysis")
-     print("=" * 40)
-
-     try:
-         from src.visualization.kit_relationships import load_kit_hierarchy
-         hierarchy_data = load_kit_hierarchy()
-
-         if not hierarchy_data:
-             print("No hierarchy data available")
-             return
-
-         # Analyze hierarchy structure
-         masters = []
-         subkits = []
-         prepacks = []
-
-         total_dependencies = 0
-
-         for kit_id, kit_info in hierarchy_data.items():
-             kit_type = kit_info.get('type', 'unknown')
-             dependencies = kit_info.get('dependencies', [])
-             total_dependencies += len(dependencies)
-
-             if kit_type == 'master':
-                 masters.append(kit_id)
-             elif kit_type == 'subkit':
-                 subkits.append(kit_id)
-             elif kit_type == 'prepack':
-                 prepacks.append(kit_id)
-
-         print(f"πŸ“¦ Total Kits: {len(hierarchy_data)}")
-         print(f"   β€’ Masters: {len(masters)}")
-         print(f"   β€’ Subkits: {len(subkits)}")
-         print(f"   β€’ Prepacks: {len(prepacks)}")
-         print(f"πŸ”— Total Dependencies: {total_dependencies}")
-
-         # Find most complex kit (most dependencies)
-         max_deps = 0
-         most_complex = None
-
-         for kit_id, kit_info in hierarchy_data.items():
-             deps = len(kit_info.get('dependencies', []))
-             if deps > max_deps:
-                 max_deps = deps
-                 most_complex = kit_id
-
-         if most_complex:
-             print(f"πŸ† Most Complex Kit: {most_complex} ({max_deps} dependencies)")
-
-         # Show dependency chains
-         print(f"\nπŸ”„ Sample Dependency Chains:")
-         chains_shown = 0
-         for kit_id, kit_info in hierarchy_data.items():
-             if kit_info.get('dependencies') and chains_shown < 3:
-                 deps = kit_info['dependencies']
-                 kit_name = kit_info.get('name', kit_id)[:40] + "..." if len(kit_info.get('name', '')) > 40 else kit_info.get('name', kit_id)
-                 print(f"   Chain {chains_shown + 1}: {kit_name}")
-                 for dep in deps[:3]:  # Show first 3 dependencies
-                     dep_info = hierarchy_data.get(dep, {})
-                     dep_name = dep_info.get('name', dep)[:30] + "..." if len(dep_info.get('name', '')) > 30 else dep_info.get('name', dep)
-                     print(f"     ↳ Needs: {dep_name}")
-                 chains_shown += 1
-
-         print(f"\nThis data will be visualized in the dashboard! 🎨")
-
-     except Exception as e:
-         print(f"Error analyzing hierarchy: {e}")
-
- if __name__ == "__main__":
-     # Display hierarchy structure
-     display_hierarchy_structure()
-
-     # Test kit relationships
-     success = test_kit_relationships()
-
-     if success:
-         print(f"\nβœ… Kit relationships test completed successfully!")
-         print(f"\nTo see the visualization:")
-         print(f"1. Run: streamlit run app.py")
-         print(f"2. Go to Settings β†’ Run Optimization")
-         print(f"3. Check 'Hierarchy Flow' β†’ 'Kit Relationships' tab")
-         print(f"4. See the interactive network graph! πŸ•ΈοΈ")
-     else:
-         print(f"\n❌ Test failed - check error messages above")
-
-     print(f"\n" + "=" * 50)