work-tracing/workflow_state.md
2025-05-16 17:55:30 +04:00


Workflow State & Rules (STM + Rules + Log)

This file contains the dynamic state, embedded rules, active plan, and log for the current session. It is read and updated frequently by the AI during its operational loop.


State

Holds the current status of the workflow.

Phase: IDLE
Status: TASK_COMPLETED
CurrentTaskID: ChangeIdleThreshold
CurrentStep: ""
CurrentItem: ""

Plan

Contains the step-by-step implementation plan generated during the BLUEPRINT phase.

Task: ChangeIdleThreshold Change the idle threshold from 1 to 10 minutes across the application to determine when a user is considered inactive.

  • [✓] Step CIT-1: Update client-side PowerShell script

    • [✓] Modify client_tools/report.ps1:
    • [✓] Locate the variable $IdleThresholdMinutes and change its default value from 1 to 10
    • [✓] Update any related comments to reflect the new threshold
  • [✓] Step CIT-2: Update client-side configuration documentation

    • [✓] Modify README.md:
    • [✓] Update the example config.env under "Client-Side Setup" to show IDLE_THRESHOLD_MINUTES="10"
    • [✓] Update any related documentation about idle detection to reference 10 minutes instead of 5
  • [✓] Step CIT-3: Update server-side code references

    • [✓] Check for any server-side references to the idle threshold value in the Flask application
    • [✓] Updated the auto_timeout_seconds in app/api/reports.py from 6 minutes to 10 minutes
  • [✓] Step CIT-4: Update project configuration documentation

    • [✓] Modify project_config.md:
    • [✓] Update "Idle Detection Logic" to reference 10 minutes instead of 5 minutes
  • [✓] Step CIT-5: Verify configuration consistency

    • [✓] Ensure all references to the idle threshold are consistently set to 10 minutes across the application
    • [✓] Updated app.py fetch_user_activity function to use 10-minute idle threshold
    • [✓] Updated comments in report.ps1 about periodic reporting
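
The server-side check updated in Step CIT-5 can be sketched as follows (the helper name and signature are assumptions for illustration; only the 10-minute threshold comes from the task above):

```python
from datetime import datetime, timedelta

# Assumed analogue of the fetch_user_activity check: a user is considered
# inactive once their last report is older than the idle threshold.
IDLE_THRESHOLD_MINUTES = 10

def is_user_idle(last_report_time: datetime, now: datetime) -> bool:
    """Return True when the last report is older than the idle threshold."""
    return now - last_report_time > timedelta(minutes=IDLE_THRESHOLD_MINUTES)
```

With this rule, a user whose last report arrived 11 minutes ago is idle, while one reporting within the last 10 minutes is not.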

Task: ChangeRealWorkHoursToProductionLogic Change the test logic in the real work hours tracking system to production logic, where every consecutive 40 minutes (not 2 minutes) counts as 1 hour (not 3 hours).

  • Step CRWH-1: Update the work_hours_service.py file

    • Modify app/services/work_hours_service.py:
    • Change if consecutive_working_minutes == 2: to if consecutive_working_minutes == 40:
    • Change logged_hours = 3 to logged_hours = 1
    • Remove "TEST LOGIC:" prefix from log message at line 158
    • Remove "TEST LOGIC" reference in the final completion log message at line 260
  • Step CRWH-2: Update the database schema

    • Modify database_utils/001_create_user_real_work_summary.sql:
    • Add the last_event_completed_block BOOLEAN NOT NULL DEFAULT FALSE column to the table definition
  • Step CRWH-3: Verify changes

    • Verify that all code changes are consistent
    • Make sure all references to 40-minute blocks and real work hours are accurate
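
The production counting rule described in Step CRWH-1 (40 consecutive 'working' minutes log 1 hour, any break resets the streak) can be sketched as a pure function; the name and list-of-states input are assumptions, not the actual work_hours_service.py code:

```python
def count_real_hours(minute_states, block_minutes=40):
    """Count logged hours from a per-minute state sequence.

    Production logic: every run of `block_minutes` consecutive 'working'
    minutes logs 1 hour; any non-working minute resets the streak.
    """
    hours = 0
    streak = 0
    for state in minute_states:
        if state == "working":
            streak += 1
            if streak == block_minutes:
                hours += 1
                streak = 0  # start counting the next block from zero
        else:
            streak = 0  # a gap breaks the consecutive block
    return hours
```

Note that 39 working minutes, a break, and another 39 working minutes log zero hours: only unbroken 40-minute blocks count.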

Task: ImplementRealWorkHoursTracking Create user_real_work_summary table. Implement an automated, scheduled task within Flask (using a library like APScheduler) to populate this table with 'real work hours' (40 consecutive 'working' minutes = 1 hour), with an optional CLI command for manual triggering/testing. Create a new API endpoint to serve this data. Update the frontend dashboard to display these 'real work hours' instead of the previous duration calculation.

  • Step IRWHT-1: Database Setup
    • Create a new SQL file (e.g., database_utils/001_create_user_real_work_summary.sql) with the CREATE TABLE IF NOT EXISTS user_real_work_summary (id SERIAL PRIMARY KEY, username VARCHAR(255) NOT NULL, work_date DATE NOT NULL, real_hours_counted INTEGER NOT NULL DEFAULT 0, last_processed_event_id INTEGER, CONSTRAINT uq_user_work_date UNIQUE (username, work_date)); statement.
    • Update app/models.py to add a new SQLAlchemy model class UserRealWorkSummary mapping to this table.
    • Update README.md to include instructions for applying this new SQL script. Determine if this script should be incorporated into the flask init-db command or remain a manual step.
  • Step IRWHT-2 (Revised - In-App Scheduler): Implement Automated Work Hour Calculation Task
    • Add APScheduler to requirements.txt.
    • Create a new Python module for the calculation logic (e.g., app/services/work_hours_service.py or app/tasks/calculator.py).
    • Implement the core function, calculate_and_store_real_work_hours(), in this module. This function will:
      • Fetch unprocessed WorkEvent records (efficiently using last_processed_event_id from UserRealWorkSummary for each user to avoid reprocessing all events).
      • Iterate through these events on a per-user basis, ordered by timestamp.
      • Carefully track sequences of 40 consecutive 'working' state events (where each event represents approximately 1 minute of work).
      • Perform UPSERT (insert or update) operations on the UserRealWorkSummary table to increment real_hours_counted and update last_processed_event_id for the corresponding username and work_date.
      • Ensure the logic correctly handles the work_date derivation from event timestamps and manages last_processed_event_id for robust incremental processing.
    • Integrate and configure APScheduler within the Flask application (e.g., in app/__init__.py or a dedicated app/scheduler.py module that is initialized by the app factory).
    • Define a scheduled job within APScheduler to call the calculate_and_store_real_work_hours() function periodically (e.g., every 10-15 minutes. The exact interval to be determined based on desired freshness vs. server load).
    • Create a Flask CLI command (e.g., in app/cli.py, named flask process-real-hours) that directly calls the calculate_and_store_real_work_hours() function. This command is for manual execution (testing, initial backfilling, ad-hoc runs).
  • Step IRWHT-3 (Revised): Create New Backend API Endpoint for Real Work Hours
    • In app/api/reports.py (or a new relevant API file if preferred, e.g., app/api/real_time_reports.py), define a new API endpoint (e.g., /api/reports/real_work_hours).
    • This endpoint should accept parameters such as username (optional filter), start_date, and end_date (or a period parameter like 'today', 'current_week', 'current_month' for predefined ranges).
    • The endpoint's logic will query the user_real_work_summary table (using the UserRealWorkSummary model) to fetch real_hours_counted for the specified users and timeframes.
    • It should return the data in a JSON format suitable for frontend consumption (e.g., a list of dictionaries: [{'user': 'name', 'date': 'YYYY-MM-DD', 'real_hours_counted': X}, ...] or aggregated as per the period requested).
  • Step IRWHT-4 (Revised): Update Frontend to Display Real Work Hours
    • In static/js/dashboard.js (or tableManager.js or other relevant JavaScript modules):
      • Modify the existing JavaScript functions responsible for fetching and displaying what was previously work duration.
      • These functions should now be updated to call the new /api/reports/real_work_hours endpoint.
      • The data parsing logic within these JavaScript functions must be updated to correctly use the real_hours_counted field from the new endpoint's JSON response.
    • In templates/dashboard.html:
      • Update table headers (e.g., from "Duration (Hours)" to "Real Work Hours" or "Focused Work Blocks").
      • Update any other descriptive text on the dashboard if necessary to accurately reflect that the new "real work hours" metric is being displayed.
      • Ensure the table columns correctly bind to and display the new data structure provided by the updated JavaScript logic.
  • Step IRWHT-5: Documentation Updates
    • Update README.md to:
      • Add APScheduler to the list of dependencies in requirements.txt section.
      • Describe the new user_real_work_summary table, its columns, and its specific purpose in tracking 40-minute focused work blocks.
      • Document the automated scheduled task (managed by APScheduler) for processing real work hours, mentioning its approximate frequency and its role in keeping user_real_work_summary up-to-date.
      • Document the Flask CLI command (flask process-real-hours) for manual/testing/backfilling.
      • Document the new /api/reports/real_work_hours API endpoint, including its accepted parameters (e.g., username, start_date, end_date, period) and the structure of its JSON response.
      • Update any frontend/dashboard usage descriptions or screenshots to reflect the change in how work hours are presented (now showing "real work hours").
  • Step IRWHT-6: Testing and Verification
    • Thoroughly test the calculate_and_store_real_work_hours() function directly and via the CLI command (flask process-real-hours). Prepare diverse sample work_events data.
    • Verify that the user_real_work_summary table is populated accurately.
    • Test incremental processing using last_processed_event_id.
    • Verify APScheduler is correctly configured and the scheduled job triggers the calculation function as expected (e.g., by checking logs or observing table updates in a development environment with a short interval).
    • Test the new /api/reports/real_work_hours API endpoint.
    • Verify frontend dashboard updates.

Phase: CONSTRUCT
Status: AWAITING_USER_VALIDATION
CurrentTaskID: ImplementRealWorkHoursTracking
CurrentStep: IRWHT-6
CurrentItem: "User Testing and Verification"

Task: OrganizeProjectRootAndSimplifyStartup Clean up the project's root directory by removing unused scripts, reorganizing existing files into appropriate subdirectories, and providing a single, enhanced script for starting the application.

  • [✓] Step OPRS-1: Analyze Root Directory Files and Plan Relocations/Deletions
    • Identify files in the root directory (e.g., check_db.py, config.env, create_db.sql, ecosystem.config.js, fix_task.cmd, report.ps1, run_hidden.vbs, schedule_task.ps1, start_app.sh).
    • Determine which files to keep in root, move to new/existing subdirectories, or delete.
    • Specifically:
      • Keep in root: config.env, ecosystem.config.js, README.md, requirements.txt, run.py, project_config.md, workflow_state.md, .gitignore, .cursorignore.
      • Create client_tools/ directory. Move report.ps1, schedule_task.ps1, run_hidden.vbs into client_tools/.
      • Create database_utils/ directory. Move create_db.sql into database_utils/.
      • Mark check_db.py and fix_task.cmd for deletion after a brief content review to confirm redundancy/obsolescence.
  • [✓] Step OPRS-2: Review Content of Potentially Unused Scripts
    • Read check_db.py to confirm its functionality is covered by flask init-db or app's auto-init.
    • Read fix_task.cmd to understand its purpose and determine if it's still needed.
  • [✓] Step OPRS-3: Implement File Deletions
    • [✓] Delete check_db.py if confirmed redundant.
    • [✓] Delete fix_task.cmd if confirmed obsolete or unused.
  • [✓] Step OPRS-4: Implement File Relocations
    • [✓] Create the directory client_tools.
    • [✓] Move report.ps1 to client_tools/report.ps1.
    • [✓] Move schedule_task.ps1 to client_tools/schedule_task.ps1.
    • [✓] Move run_hidden.vbs to client_tools/run_hidden.vbs.
    • [✓] Create the directory database_utils.
    • [✓] Move create_db.sql to database_utils/create_db.sql.
  • [✓] Step OPRS-5: Enhance start_app.sh as the Single Startup Script
    • [✓] Read the current start_app.sh.
    • [✓] Modify start_app.sh to:
      • [✓] Check for and activate the virtual environment (venv).
      • [✓] Prompt the user to choose between 'development' or 'production' mode (or detect via an argument).
      • [✓] If 'development', run python run.py.
      • [✓] If 'production', run gunicorn -w 4 -b 0.0.0.0:5000 "app:create_app()".
      • [✓] Include basic error checking (e.g., venv not found, run.py not found).
  • [✓] Step OPRS-6: Update README.md
    • [✓] Modify the "Project Structure" section to reflect the new client_tools/ and database_utils/ directories and the removal/relocation of files.
    • [✓] Update "Installation Instructions" and "Usage Examples" to refer to start_app.sh as the primary way to run the server.
    • [✓] Ensure client-side setup instructions correctly point to files now in client_tools/.
  • [✓] Step OPRS-7: Update .gitignore
    • [✓] Ensure client_tools/ and database_utils/ are tracked (i.e., not in .gitignore).
    • [✓] Verify instance/, venv/, __pycache__/, *.pyc remain ignored.
  • [✓] Step OPRS-8: Final Review and State Update
    • [✓] Review all changes for consistency and correctness.
    • [✓] Update workflow_state.md to Phase: IDLE, Status: TASK_COMPLETED for OrganizeProjectRootAndSimplifyStartup.
  • [✓] Step CAPT5-6: Check and update project_config.md if it specifies the port. (No hardcoded port found, no change needed).
  • [✓] Step CAPT5-7: Ask user to try running ./start_app.sh again and confirm functionality on port 5050.

Task: DiagnoseAndFixRuntimeModuleError Diagnose and resolve the ModuleNotFoundError: No module named 'dotenv' when running start_app.sh.

  • [✓] Step DFRME-1: Log the error and identify the likely cause as missing dependencies in the virtual environment.
  • [✓] Step DFRME-2: Instruct the user to activate the virtual environment and run pip install -r requirements.txt to ensure all dependencies are correctly installed.
  • [✓] Step DFRME-3: User confirmed re-running ./start_app.sh did not resolve the issue.
  • [✓] Step DFRME-4: Verify python-dotenv is in requirements.txt.
    • [✓] Read requirements.txt. Confirm python-dotenv is listed.
  • [✓] Step DFRME-5: Modify start_app.sh for more diagnostics.
    • [✓] Inside the "development mode" block, before python run.py or python3 run.py:
      • [✓] Add echo "Which python: $(which python)"
      • [✓] Add echo "Which python3: $(which python3)"
      • [✓] Add echo "PYTHONPATH: $PYTHONPATH"
      • [✓] Add echo "Active Python version: $(python --version || python3 --version)"
      • [✓] Add echo "Attempting to import dotenv directly via command line..."
      • [✓] Add python -c "import dotenv; print('dotenv path:', dotenv.__file__)" || python3 -c "import dotenv; print('dotenv path:', dotenv.__file__)"
  • [✓] Step DFRME-6: User re-ran ./start_app.sh and provided the new diagnostic output.
  • [✓] Step DFRME-7: Analyze diagnostic output and propose further steps.
    • [✓] Output shows venv Python is used but still can't import dotenv.
    • [✓] User explicitly used venv/bin/pip install -r requirements.txt.
    • [✓] User ran venv/bin/pip show python-dotenv and provided output.
    • [✓] User tried ./start_app.sh again, issue persists.
    • [✓] Identified Python version mismatch: venv is Python 3.12, but pip installed to a python3.11 site-packages directory within venv.
  • [✓] Step DFRME-8: Recreate virtual environment with consistent Python version.
    • [✓] Instruct user to deactivate and delete the current venv directory.
    • [✓] Instruct user to recreate venv using python3.11 -m venv venv (user specified python3.11).
    • [✓] Instruct user to activate the new venv.
    • [✓] Instruct user to run pip install -r requirements.txt inside the new venv.
  • [✓] Step DFRME-9: User re-ran ./start_app.sh; dotenv error resolved, new ImportError appeared.

Task: FixImportErrorInUtils Resolve the ImportError: cannot import name 'filter_sql_by_user' from 'app.utils.queries'.

  • [✓] Step FIEIU-1: Log the new ImportError.
  • [✓] Step FIEIU-2: Read app/utils/queries.py to check for the definition of filter_sql_by_user. (Confirmed it was removed).
  • [✓] Step FIEIU-3: Read app/utils/__init__.py to examine the import statement.
  • [✓] Step FIEIU-4: Based on findings, either correct the function name in queries.py or __init__.py, or add the function definition if missing, or adjust the import statement.
    • [✓] Confirmed filter_sql_by_user is obsolete.
    • [✓] Remove filter_sql_by_user from the import statement in app/utils/__init__.py.
    • [✓] Remove filter_sql_by_user from the __all__ list in app/utils/__init__.py.
  • [✓] Step FIEIU-5: User ran ./start_app.sh; ImportError resolved, but port 5000 is in use.

Task: SyncFrontendWithBackendLogic Adjust frontend JavaScript to align with backend data availability and reporting frequency, primarily by tuning auto-refresh intervals and verifying state display logic.

  • [✓] Step SFBL-1: Analyze static/js/autoRefresh.js. Determine the current refresh intervals used for fetching user states (via userStates.js and /api/user_states) and main report data (e.g., daily/weekly/monthly views).
  • [✓] Step SFBL-2: Modify static/js/autoRefresh.js to set the primary refresh interval to 60 seconds (60000 milliseconds). This interval should apply to both user state updates and the main report data tables. This change aligns the frontend refresh rate with the 1-minute reporting interval of report.ps1, ensuring the dashboard displays near real-time information. Add comments to the code explaining this alignment.
  • [✓] Step SFBL-3: Analyze static/js/userStates.js. Verify that it correctly interprets the status provided by the /api/user_states endpoint and updates the UI accordingly. Confirm there are no frontend-specific assumptions about state transitions or timings that might conflict with the more frequent and reliable data now coming from the client. (This is primarily a check; fixes will only be applied if clear discrepancies or outdated logic are found).
  • [✓] Step SFBL-4: Analyze static/js/tableManager.js. Confirm that it directly displays aggregated data (like duration_hours, first_login_time) as provided by the backend API endpoints, without performing additional calculations or interpretations that could lead to discrepancies. (This is primarily a check; fixes will only be applied if clear discrepancies or outdated logic are found).

Task: EnsureLogoffReport Modify report.ps1 to reliably send a "stopped" status update when the script terminates, such as during user logoff.

  • [✓] Step ELR-1: Analyze report.ps1. Locate the main monitoring loop (while ($true)) and its existing try...catch structure.
  • [✓] Step ELR-2: Add a finally block to the try...catch structure that encompasses the main loop.
  • [✓] Step ELR-3: Inside the new finally block:
    • [✓] Add a log message indicating the script is exiting (e.g., "Script ending (e.g., user logoff, task stop, or after error). Attempting to report 'stopped' state.").
    • [✓] Unconditionally call Send-StateReport -State "stopped".
    • [✓] Add a log message confirming the final report attempt (e.g., "Final 'stopped' state report attempt made.").
    • [✓] Add a script end marker log message (e.g., "================ Script Ended (via Finally) ================.")
  • [✓] Step ELR-4: Modify the existing catch block for the main loop:
    • [✓] Update its log message to indicate that the 'stopped' state will be handled by the finally block (e.g., "Critical error in main loop: $_. Script will attempt to report 'stopped' state in finally block.").
    • [✓] Remove the conditional Send-StateReport -State "stopped" call from the catch block, as this is now handled by the finally block.
  • [✓] Step ELR-5: Remove the redundant Write-Log "================ Script Ended Gracefully ================" line from the end of the script, as the finally block now handles the definitive script end logging.

Task: AlignReportPs1ToMinuteLoggingAndSpec Modify report.ps1 to ensure it sends user status ("working" or "stopped") logs approximately every minute, align its default idle threshold with project specifications, and update README.md for configuration guidance.

  • [✓] Step ALIGN-1: In report.ps1, confirm the default for $pollIntervalSeconds is 60. If not, set it to 60. Add/update a comment to clarify its purpose (e.g., "# How often to check user activity (in seconds).").
  • [✓] Step ALIGN-2: In report.ps1, confirm the default for $reportIntervalMinutes is 1. If not, set it to 1. Add/update a comment to clarify its purpose and how it enables minute-by-minute "working" state reporting (e.g., "# How often to send a status update if state hasn't changed (in minutes). With a 1-minute poll interval, this ensures 'working' state is reported every minute. State changes are reported immediately.").
  • [✓] Step ALIGN-3: In report.ps1, change the default value of $IdleThresholdMinutes from 15 to 5. This aligns with project_config.md ("Idle Detection Logic: Fixed inactivity threshold of 5 minutes") and README.md examples. Add/update a comment for this variable (e.g., "# User idle time in minutes before state changes to 'stopped'.").
  • [✓] Step ALIGN-4: In README.md, update the client-side config.env example under "Installation Instructions" -> "Client-Side Setup" -> "Configure" to include REPORT_INTERVAL_MINUTES="1". Ensure IDLE_THRESHOLD_MINUTES is shown as "5" and POLL_INTERVAL_SECONDS as "60".
    • The config.env block in README.md should be updated to:
      API_ENDPOINT="http://your-server-address:5000/api/report"
      IDLE_THRESHOLD_MINUTES="5"
      POLL_INTERVAL_SECONDS="60"
      REPORT_INTERVAL_MINUTES="1"
      

Task: AdvancedRefactoringAndDocumentation Refactor exception handling, improve code clarity and naming, and create a comprehensive README.md.

Phase 1: Code Refactoring

  • Sub-Task 1: Refine Exception Handling (REH)
    • [✓] Step REH-1: Analyze app/api/events.py. Replace generic except Exception blocks with specific exception types (e.g., SQLAlchemyError, ValueError, TypeError). Ensure error responses are informative.
    • [✓] Step REH-2: Analyze app/api/reports.py. Apply similar specific exception handling. Pay attention to potential errors during database queries or data processing.
    • [✓] Step REH-3: Analyze app/utils/queries.py and app/utils/formatting.py. Ensure any potential errors are handled gracefully or documented.
    • [✓] Step REH-4: Analyze run.py and app/__init__.py. Review exception handling during app initialization and configuration.
  • Sub-Task 2: Enhance Code Clarity and Naming (ECN) (Concurrent with Sub-Task 1)
    • [✓] Step ECN-1: Review app/api/events.py for clarity, naming, docstrings, and comments.
    • [✓] Step ECN-2: Review app/api/reports.py for clarity, naming, docstrings, and comments.
    • [✓] Step ECN-3: Review app/utils/queries.py for clarity, naming, docstrings, and comments.
    • [✓] Step ECN-4: Review app/utils/formatting.py for clarity, naming, docstrings, and comments.
    • [✓] Step ECN-5: Review app/views/dashboard.py for clarity, naming, docstrings, and comments.
    • [✓] Step ECN-6: Review app/models.py for clarity, naming, docstrings, and comments.
    • [✓] Step ECN-7: Review run.py and app/__init__.py for clarity, naming, docstrings, and comments.
    • [✓] Step ECN-8: Briefly review static/js/ submodules for major clarity/naming issues.

Phase 2: README Creation

  • Sub-Task 3: Create Comprehensive README.md (DOC)
    • [✓] Step DOC-1: Delete the existing README.md file.
    • [✓] Step DOC-2: Create a new README.md file.
    • [✓] Step DOC-3: Draft "Project Title" and "Brief Project Description".
    • [✓] Step DOC-4: Draft "Installation Instructions" (Client and Server).
    • [✓] Step DOC-5: Draft "Usage Examples" (API interaction, dashboard access).
    • [✓] Step DOC-6: Draft "Structure of the Project".
    • [✓] Step DOC-7: Draft "Dependencies and Requirements".
    • [✓] Step DOC-8: Draft "Contributing Guidelines".
    • [✓] Step DOC-9: Draft "License Information".
    • [✓] Step DOC-10: Review and refine the complete README.md.

Task: ModifyReportingDashboard (Completed Task) Modify the reporting and dashboard to show aggregated active working time duration in simple tables.

  • [✓] Step Mod-1: Define new SQL queries in app.py to calculate daily, weekly, and monthly working durations per user using LEAD() and JULIANDAY(), aggregating with SUM().
  • [✓] Step Mod-2: Update Flask endpoints in app.py:
    • [✓] Step Mod-2.1: Modify /api/reports/daily to use the new daily duration query.
    • [✓] Step Mod-2.2: Create /api/reports/weekly using a new weekly duration query.
    • [✓] Step Mod-2.3: Create /api/reports/monthly using a new monthly duration query.
    • [✓] Step Mod-2.4: Ensure endpoints return JSON data formatted for table display (e.g., list of dicts with user, period, duration_hours).
  • [✓] Step Mod-3: Update templates/dashboard.html:
    • [✓] Step Mod-3.1: Remove Chart.js script inclusion and chart-related HTML elements.
    • [✓] Step Mod-3.2: Add JavaScript to fetch data from the new/updated API endpoints.
    • [✓] Step Mod-3.3: Create HTML tables to display the fetched duration data (User, Period, Duration).

Task: AddExtensiveLogging (Completed Task) Add extensive file-based logging to both the Flask server and the PowerShell client.

  • [✓] Step Log-1: Configure Flask Logging (app.py):
    • [✓] Step Log-1.1: Import logging and logging.handlers.
    • [✓] Step Log-1.2: In create_app or equivalent setup location: Configure a RotatingFileHandler to write to