# Workflow State & Rules (STM + Rules + Log)

This file contains the dynamic state, embedded rules, active plan, and log for the current session. It is read and updated frequently by the AI during its operational loop.
## State

Holds the current status of the workflow.

Phase: IDLE
Status: TASK_COMPLETED
CurrentTaskID: ChangeIdleThreshold
CurrentStep: ""
CurrentItem: ""
## Plan

Contains the step-by-step implementation plan generated during the BLUEPRINT phase.
### Task: ChangeIdleThreshold

Change the idle threshold from 1 to 10 minutes across the application to determine when a user is considered inactive.

- [✓] Step CIT-1: Update client-side PowerShell script
  - [✓] Modify `client_tools/report.ps1`:
    - [✓] Locate the variable `$IdleThresholdMinutes` and change its default value from 1 to 10
    - [✓] Update any related comments to reflect the new threshold
- [✓] Step CIT-2: Update client-side configuration documentation
  - [✓] Modify `README.md`:
    - [✓] Update the example `config.env` under "Client-Side Setup" to show `IDLE_THRESHOLD_MINUTES="10"`
    - [✓] Update any related documentation about idle detection to reference 10 minutes instead of 5
- [✓] Step CIT-3: Update server-side code references
  - [✓] Check for any server-side references to the idle threshold value in the Flask application
  - [✓] Updated the `auto_timeout_seconds` in `app/api/reports.py` from 6 minutes to 10 minutes (see the sketch after this list)
- [✓] Step CIT-4: Update project configuration documentation
  - [✓] Modify `project_config.md`:
    - [✓] Update "Idle Detection Logic" to reference 10 minutes instead of 5 minutes
- [✓] Step CIT-5: Verify configuration consistency
  - [✓] Ensure all references to the idle threshold are consistently set to 10 minutes across the application
  - [✓] Updated the `app.py` `fetch_user_activity` function to use the 10-minute idle threshold
  - [✓] Updated comments in `report.ps1` about periodic reporting
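For illustration only, a minimal sketch of how a server-side 10-minute timeout could classify a user as inactive from the age of their last report. `auto_timeout_seconds` and `fetch_user_activity` are named in the steps above; the helper below and its exact behavior are assumptions, not the project's actual code.

```python
from datetime import datetime, timedelta, timezone

# Matches the new threshold: 10 minutes (the step above raised this from 6 minutes).
auto_timeout_seconds = 600

def derive_user_state(last_report_time: datetime) -> str:
    """Hypothetical helper: treat a user as 'stopped' once their reports go stale."""
    age = datetime.now(timezone.utc) - last_report_time
    return "stopped" if age > timedelta(seconds=auto_timeout_seconds) else "working"
```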
### Task: ChangeRealWorkHoursToProductionLogic

Change the test logic in the real work hours tracking system to production logic, where every consecutive 40 minutes (not 2 minutes) counts as 1 hour (not 3 hours).

- Step CRWH-1: Update the work_hours_service.py file (see the sketch after this step)
  - Modify `app/services/work_hours_service.py`:
    - Change `if consecutive_working_minutes == 2:` to `if consecutive_working_minutes == 40:`
    - Change `logged_hours = 3` to `logged_hours = 1`
    - Remove the "TEST LOGIC:" prefix from the log message at line 158
    - Remove the "TEST LOGIC" reference in the final completion log message at line 260
- Step CRWH-2: Update the database schema
  - Modify `database_utils/001_create_user_real_work_summary.sql`:
    - Add the `last_event_completed_block BOOLEAN NOT NULL DEFAULT FALSE` column to the table definition
- Step CRWH-3: Verify changes
  - Verify that all code changes are consistent
  - Make sure all references to 40-minute blocks and real work hours are accurate
### Task: ImplementRealWorkHoursTracking

Create the user_real_work_summary table. Implement an automated, scheduled task within Flask (using a library like APScheduler) to populate this table with 'real work hours' (40 consecutive 'working' minutes = 1 hour), with an optional CLI command for manual triggering/testing. Create a new API endpoint to serve this data. Update the frontend dashboard to display these 'real work hours' instead of the previous duration calculation.

- Step IRWHT-1: Database Setup (see the model sketch after this step)
  - Create a new SQL file (e.g., `database_utils/001_create_user_real_work_summary.sql`) with the `CREATE TABLE IF NOT EXISTS user_real_work_summary (id SERIAL PRIMARY KEY, username VARCHAR(255) NOT NULL, work_date DATE NOT NULL, real_hours_counted INTEGER NOT NULL DEFAULT 0, last_processed_event_id INTEGER, CONSTRAINT uq_user_work_date UNIQUE (username, work_date));` statement.
  - Update `app/models.py` to add a new SQLAlchemy model class `UserRealWorkSummary` mapping to this table.
  - Update `README.md` to include instructions for applying this new SQL script. Determine if this script should be incorporated into the `flask init-db` command or remain a manual step.
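A minimal sketch of the `UserRealWorkSummary` model from the step above; the columns mirror the CREATE TABLE statement, while `from app import db` assumes a conventional Flask-SQLAlchemy layout for this project's `app/models.py`.

```python
from app import db  # assumed Flask-SQLAlchemy instance; adjust to the project's layout

class UserRealWorkSummary(db.Model):
    """One row per (username, work_date), tracking completed 40-minute blocks."""
    __tablename__ = "user_real_work_summary"

    id = db.Column(db.Integer, primary_key=True)
    username = db.Column(db.String(255), nullable=False)
    work_date = db.Column(db.Date, nullable=False)
    real_hours_counted = db.Column(db.Integer, nullable=False, default=0)
    last_processed_event_id = db.Column(db.Integer)  # enables incremental processing

    __table_args__ = (
        db.UniqueConstraint("username", "work_date", name="uq_user_work_date"),
    )
```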
- Step IRWHT-2 (Revised - In-App Scheduler): Implement Automated Work Hour Calculation Task (see the scheduler sketch after this step)
  - Add `APScheduler` to `requirements.txt`.
  - Create a new Python module for the calculation logic (e.g., `app/services/work_hours_service.py` or `app/tasks/calculator.py`).
  - Implement the core function, `calculate_and_store_real_work_hours()`, in this module. This function will:
    - Fetch unprocessed `WorkEvent` records (efficiently using `last_processed_event_id` from `UserRealWorkSummary` for each user to avoid reprocessing all events).
    - Iterate through these events on a per-user basis, ordered by timestamp.
    - Carefully track sequences of 40 consecutive 'working' state events (where each event represents approximately 1 minute of work).
    - Perform UPSERT (insert or update) operations on the `UserRealWorkSummary` table to increment `real_hours_counted` and update `last_processed_event_id` for the corresponding `username` and `work_date`.
    - Ensure the logic correctly handles the `work_date` derivation from event timestamps and manages `last_processed_event_id` for robust incremental processing.
  - Integrate and configure `APScheduler` within the Flask application (e.g., in `app/__init__.py` or a dedicated `app/scheduler.py` module that is initialized by the app factory).
  - Define a scheduled job within APScheduler to call the `calculate_and_store_real_work_hours()` function periodically (e.g., every 10-15 minutes; the exact interval to be determined based on desired freshness vs. server load).
  - Create a Flask CLI command (e.g., in `app/cli.py`, named `flask process-real-hours`) that directly calls the `calculate_and_store_real_work_hours()` function. This command is for manual execution (testing, initial backfilling, ad-hoc runs).
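A minimal sketch of the APScheduler wiring this step describes, assuming a `BackgroundScheduler`, the module path named above for `calculate_and_store_real_work_hours`, and an `init_scheduler(app)` hook called from the app factory; the 15-minute interval is one of the suggested values, not a decided one.

```python
# Hypothetical app/scheduler.py, following the layout suggested above.
from apscheduler.schedulers.background import BackgroundScheduler

from app.services.work_hours_service import calculate_and_store_real_work_hours

def init_scheduler(app):
    """Schedule periodic real-work-hours processing and register the CLI command."""

    def run_with_context():
        # Background threads need an app context for Flask-SQLAlchemy access.
        with app.app_context():
            calculate_and_store_real_work_hours()

    scheduler = BackgroundScheduler()
    scheduler.add_job(
        func=run_with_context,
        trigger="interval",
        minutes=15,  # assumed value from the suggested 10-15 minute range
        id="process_real_hours",
        replace_existing=True,
    )
    scheduler.start()

    @app.cli.command("process-real-hours")
    def process_real_hours():
        """Manual trigger for testing, backfilling, or ad-hoc runs."""
        calculate_and_store_real_work_hours()
```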
- Step IRWHT-3 (Revised): Create New Backend API Endpoint for Real Work Hours (see the endpoint sketch after this step)
  - In `app/api/reports.py` (or a new relevant API file if preferred, e.g., `app/api/real_time_reports.py`), define a new API endpoint (e.g., `/api/reports/real_work_hours`).
  - This endpoint should accept parameters such as `username` (optional filter), `start_date`, and `end_date` (or a period parameter like 'today', 'current_week', 'current_month' for predefined ranges).
  - The endpoint's logic will query the `user_real_work_summary` table (using the `UserRealWorkSummary` model) to fetch `real_hours_counted` for the specified users and timeframes.
  - It should return the data in a JSON format suitable for frontend consumption (e.g., a list of dictionaries: `[{'user': 'name', 'date': 'YYYY-MM-DD', 'real_hours_counted': X}, ...]` or aggregated as per the period requested).
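A minimal sketch of the endpoint described in this step, simplified to the `username`/`start_date`/`end_date` parameter form; the blueprint name and its registration are assumptions about the project's API layout, and it relies on the `UserRealWorkSummary` model sketched under IRWHT-1.

```python
from datetime import date
from flask import Blueprint, jsonify, request

from app.models import UserRealWorkSummary

reports_bp = Blueprint("reports", __name__)  # assumed blueprint name

@reports_bp.route("/api/reports/real_work_hours")
def real_work_hours():
    """Return real_hours_counted rows filtered by user and date range."""
    query = UserRealWorkSummary.query
    if username := request.args.get("username"):
        query = query.filter_by(username=username)
    if start := request.args.get("start_date"):
        query = query.filter(UserRealWorkSummary.work_date >= date.fromisoformat(start))
    if end := request.args.get("end_date"):
        query = query.filter(UserRealWorkSummary.work_date <= date.fromisoformat(end))

    return jsonify([
        {
            "user": row.username,
            "date": row.work_date.isoformat(),
            "real_hours_counted": row.real_hours_counted,
        }
        for row in query.order_by(UserRealWorkSummary.work_date).all()
    ])
```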
- Step IRWHT-4 (Revised): Update Frontend to Display Real Work Hours
  - In `static/js/dashboard.js` (or `tableManager.js` or other relevant JavaScript modules):
    - Modify the existing JavaScript functions responsible for fetching and displaying what was previously work duration.
    - These functions should now be updated to call the new `/api/reports/real_work_hours` endpoint.
    - The data parsing logic within these JavaScript functions must be updated to correctly use the `real_hours_counted` field from the new endpoint's JSON response.
  - In `templates/dashboard.html`:
    - Update table headers (e.g., from "Duration (Hours)" to "Real Work Hours" or "Focused Work Blocks").
    - Update any other descriptive text on the dashboard if necessary to accurately reflect that the new "real work hours" metric is being displayed.
    - Ensure the table columns correctly bind to and display the new data structure provided by the updated JavaScript logic.
- Step IRWHT-5: Documentation Updates
  - Update `README.md` to:
    - Add `APScheduler` to the list of dependencies in the `requirements.txt` section.
    - Describe the new `user_real_work_summary` table, its columns, and its specific purpose in tracking 40-minute focused work blocks.
    - Document the automated scheduled task (managed by APScheduler) for processing real work hours, mentioning its approximate frequency and its role in keeping `user_real_work_summary` up-to-date.
    - Document the Flask CLI command (`flask process-real-hours`) for manual/testing/backfilling use.
    - Document the new `/api/reports/real_work_hours` API endpoint, including its accepted parameters (e.g., `username`, `start_date`, `end_date`, `period`) and the structure of its JSON response.
    - Update any frontend/dashboard usage descriptions or screenshots to reflect the change in how work hours are presented (now showing "real work hours").
- Step IRWHT-6: Testing and Verification (a test sketch follows this step)
  - Thoroughly test the `calculate_and_store_real_work_hours()` function directly and via the CLI command (`flask process-real-hours`). Prepare diverse sample `work_events` data.
  - Verify that the `user_real_work_summary` table is populated accurately.
  - Test incremental processing using `last_processed_event_id`.
  - Verify APScheduler is correctly configured and the scheduled job triggers the calculation function as expected (e.g., by checking logs or observing table updates in a development environment with a short interval).
  - Test the new `/api/reports/real_work_hours` API endpoint.
  - Verify frontend dashboard updates.
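As a starting point for this verification step, a small pytest sketch of the 40-minute rule; it targets the pure counting helper sketched under CRWH-1, so the import path and helper are hypothetical rather than existing project code.

```python
# Hypothetical unit tests for the 40-minute block rule; count_real_hours is the
# pure counting helper sketched under CRWH-1, imported here by assumption.
from app.services.work_hours_service import count_real_hours

def test_forty_consecutive_working_minutes_counts_one_hour():
    assert count_real_hours(["working"] * 40) == 1

def test_interruption_resets_the_block():
    # 39 working minutes, one stop, then 39 more: no block ever completes.
    events = ["working"] * 39 + ["stopped"] + ["working"] * 39
    assert count_real_hours(events) == 0

def test_two_full_blocks_count_two_hours():
    assert count_real_hours(["working"] * 80) == 2
```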
Phase: CONSTRUCT
Status: AWAITING_USER_VALIDATION
CurrentTaskID: ImplementRealWorkHoursTracking
CurrentStep: IRWHT-6
CurrentItem: "User Testing and Verification"
### Task: OrganizeProjectRootAndSimplifyStartup

Clean up the project's root directory by removing unused scripts, reorganizing existing files into appropriate subdirectories, and providing a single, enhanced script for starting the application.

- [✓] Step OPRS-1: Analyze Root Directory Files and Plan Relocations/Deletions
  - Identify files in the root directory (e.g., `check_db.py`, `config.env`, `create_db.sql`, `ecosystem.config.js`, `fix_task.cmd`, `report.ps1`, `run_hidden.vbs`, `schedule_task.ps1`, `start_app.sh`).
  - Determine which files to keep in root, move to new/existing subdirectories, or delete.
  - Specifically:
    - Keep in root: `config.env`, `ecosystem.config.js`, `README.md`, `requirements.txt`, `run.py`, `project_config.md`, `workflow_state.md`, `.gitignore`, `.cursorignore`.
    - Create a `client_tools/` directory. Move `report.ps1`, `schedule_task.ps1`, `run_hidden.vbs` into `client_tools/`.
    - Create a `database_utils/` directory. Move `create_db.sql` into `database_utils/`.
    - Mark `check_db.py` and `fix_task.cmd` for deletion after a brief content review to confirm redundancy/obsolescence.
- [✓] Step OPRS-2: Review Content of Potentially Unused Scripts
  - Read `check_db.py` to confirm its functionality is covered by `flask init-db` or the app's auto-init.
  - Read `fix_task.cmd` to understand its purpose and determine if it's still needed.
- [✓] Step OPRS-3: Implement File Deletions
  - [✓] Delete `check_db.py` if confirmed redundant.
  - [✓] Delete `fix_task.cmd` if confirmed obsolete or unused.
- [✓] Step OPRS-4: Implement File Relocations
  - [✓] Create the directory `client_tools`.
  - [✓] Move `report.ps1` to `client_tools/report.ps1`.
  - [✓] Move `schedule_task.ps1` to `client_tools/schedule_task.ps1`.
  - [✓] Move `run_hidden.vbs` to `client_tools/run_hidden.vbs`.
  - [✓] Create the directory `database_utils`.
  - [✓] Move `create_db.sql` to `database_utils/create_db.sql`.
- [✓] Step OPRS-5: Enhance `start_app.sh` as the Single Startup Script
  - [✓] Read the current `start_app.sh`.
  - [✓] Modify `start_app.sh` to:
    - [✓] Check for and activate the virtual environment (venv).
    - [✓] Prompt the user to choose between 'development' or 'production' mode (or detect via an argument).
    - [✓] If 'development', run `python run.py`.
    - [✓] If 'production', run `gunicorn -w 4 -b 0.0.0.0:5000 "app:create_app()"`.
    - [✓] Include basic error checking (e.g., venv not found, `run.py` not found).
- [✓] Step OPRS-6: Update `README.md`
  - [✓] Modify the "Project Structure" section to reflect the new `client_tools/` and `database_utils/` directories and the removal/relocation of files.
  - [✓] Update "Installation Instructions" and "Usage Examples" to refer to `start_app.sh` as the primary way to run the server.
  - [✓] Ensure client-side setup instructions correctly point to files now in `client_tools/`.
- [✓] Step OPRS-7: Update `.gitignore`
  - [✓] Ensure `client_tools/` and `database_utils/` are tracked (i.e., not in `.gitignore`).
  - [✓] Verify `instance/`, `venv/`, `__pycache__/`, `*.pyc` remain ignored.
- [✓] Step OPRS-8: Final Review and State Update
  - [✓] Review all changes for consistency and correctness.
  - [✓] Update `workflow_state.md` to `Phase: IDLE`, `Status: TASK_COMPLETED` for `OrganizeProjectRootAndSimplifyStartup`.
- [✓] Step CAPT5-6: Check and update `project_config.md` if it specifies the port. (No hardcoded port found, no change needed.)
- [✓] Step CAPT5-7: Ask user to try running `./start_app.sh` again and confirm functionality on port 5050.
### Task: DiagnoseAndFixRuntimeModuleError

Diagnose and resolve the `ModuleNotFoundError: No module named 'dotenv'` when running `start_app.sh`.

- [✓] Step DFRME-1: Log the error and identify the likely cause as missing dependencies in the virtual environment.
- [✓] Step DFRME-2: Instruct the user to activate the virtual environment and run `pip install -r requirements.txt` to ensure all dependencies are correctly installed.
- [✓] Step DFRME-3: User confirmed re-running `./start_app.sh` did not resolve the issue.
- [✓] Step DFRME-4: Verify `python-dotenv` is in `requirements.txt`.
  - [✓] Read `requirements.txt`. Confirmed `python-dotenv` is listed.
- [✓] Step DFRME-5: Modify `start_app.sh` for more diagnostics.
  - [✓] Inside the "development mode" block, before `python run.py` or `python3 run.py`:
    - [✓] Add `echo "Which python: $(which python)"`
    - [✓] Add `echo "Which python3: $(which python3)"`
    - [✓] Add `echo "PYTHONPATH: $PYTHONPATH"`
    - [✓] Add `echo "Active Python version: $(python --version || python3 --version)"`
    - [✓] Add `echo "Attempting to import dotenv directly via command line..."`
    - [✓] Add `python -c "import dotenv; print('dotenv path:', dotenv.__file__)" || python3 -c "import dotenv; print('dotenv path:', dotenv.__file__)"`
- [✓] Step DFRME-6: User re-ran `./start_app.sh` and provided the new diagnostic output.
- [✓] Step DFRME-7: Analyze diagnostic output and propose further steps.
  - [✓] Output shows the venv Python is used but still can't import dotenv.
  - [✓] User explicitly used `venv/bin/pip install -r requirements.txt`.
  - [✓] User ran `venv/bin/pip show python-dotenv` and provided output.
  - [✓] User tried `./start_app.sh` again; the issue persists.
  - [✓] Identified Python version mismatch: the venv is Python 3.12, but pip installed to a python3.11 site-packages directory within the venv.
- [✓] Step DFRME-8: Recreate the virtual environment with a consistent Python version.
  - [✓] Instruct user to deactivate and delete the current `venv` directory.
  - [✓] Instruct user to recreate the venv using `python3.11 -m venv venv` (user specified python3.11).
  - [✓] Instruct user to activate the new venv.
  - [✓] Instruct user to run `pip install -r requirements.txt` inside the new venv.
- [✓] Step DFRME-9: User re-ran `./start_app.sh`; the `dotenv` error was resolved, and a new `ImportError` appeared.
### Task: FixImportErrorInUtils

Resolve the `ImportError: cannot import name 'filter_sql_by_user' from 'app.utils.queries'`.

- [✓] Step FIEIU-1: Log the new ImportError.
- [✓] Step FIEIU-2: Read `app/utils/queries.py` to check for the definition of `filter_sql_by_user`. (Confirmed it was removed.)
- [✓] Step FIEIU-3: Read `app/utils/__init__.py` to examine the import statement.
- [✓] Step FIEIU-4: Based on findings, either correct the function name in `queries.py` or `__init__.py`, add the function definition if missing, or adjust the import statement.
  - [✓] Confirmed `filter_sql_by_user` is obsolete.
  - [✓] Remove `filter_sql_by_user` from the import statement in `app/utils/__init__.py`.
  - [✓] Remove `filter_sql_by_user` from the `__all__` list in `app/utils/__init__.py`.
- [✓] Step FIEIU-5: User ran `./start_app.sh`; the `ImportError` was resolved, but port 5000 is in use.
### Task: SyncFrontendWithBackendLogic

Adjust frontend JavaScript to align with backend data availability and reporting frequency, primarily by tuning auto-refresh intervals and verifying state display logic.

- [✓] Step SFBL-1: Analyze `static/js/autoRefresh.js`. Determine the current refresh intervals used for fetching user states (via `userStates.js` and `/api/user_states`) and main report data (e.g., daily/weekly/monthly views).
- [✓] Step SFBL-2: Modify `static/js/autoRefresh.js` to set the primary refresh interval to 60 seconds (60000 milliseconds). This interval should apply to both user state updates and the main report data tables. This change aligns the frontend refresh rate with the 1-minute reporting interval of `report.ps1`, ensuring the dashboard displays near real-time information. Add comments to the code explaining this alignment.
- [✓] Step SFBL-3: Analyze `static/js/userStates.js`. Verify that it correctly interprets the status provided by the `/api/user_states` endpoint and updates the UI accordingly. Confirm there are no frontend-specific assumptions about state transitions or timings that might conflict with the more frequent and reliable data now coming from the client. (This is primarily a check; fixes will only be applied if clear discrepancies or outdated logic are found.)
- [✓] Step SFBL-4: Analyze `static/js/tableManager.js`. Confirm that it directly displays aggregated data (like `duration_hours`, `first_login_time`) as provided by the backend API endpoints, without performing additional calculations or interpretations that could lead to discrepancies. (This is primarily a check; fixes will only be applied if clear discrepancies or outdated logic are found.)
### Task: EnsureLogoffReport

Modify report.ps1 to reliably send a "stopped" status update when the script terminates, such as during user logoff.

- [✓] Step ELR-1: Analyze `report.ps1`. Locate the main monitoring loop (`while ($true)`) and its existing `try...catch` structure.
- [✓] Step ELR-2: Add a `finally` block to the `try...catch` structure that encompasses the main loop.
- [✓] Step ELR-3: Inside the new `finally` block:
  - [✓] Add a log message indicating the script is exiting (e.g., "Script ending (e.g., user logoff, task stop, or after error). Attempting to report 'stopped' state.").
  - [✓] Unconditionally call `Send-StateReport -State "stopped"`.
  - [✓] Add a log message confirming the final report attempt (e.g., "Final 'stopped' state report attempt made.").
  - [✓] Add a script end marker log message (e.g., "================ Script Ended (via Finally) ================").
- [✓] Step ELR-4: Modify the existing `catch` block for the main loop:
  - [✓] Update its log message to indicate that the 'stopped' state will be handled by the `finally` block (e.g., "Critical error in main loop: $_. Script will attempt to report 'stopped' state in finally block.").
  - [✓] Remove the conditional `Send-StateReport -State "stopped"` call from the `catch` block, as this is now handled by the `finally` block.
- [✓] Step ELR-5: Remove the redundant `Write-Log "================ Script Ended Gracefully ================"` line from the end of the script, as the `finally` block now handles the definitive script end logging.
### Task: AlignReportPs1ToMinuteLoggingAndSpec

Modify report.ps1 to ensure it sends user status ("working" or "stopped") logs approximately every minute, align its default idle threshold with project specifications, and update README.md for configuration guidance.

- [✓] Step ALIGN-1: In report.ps1, confirm the default for `$pollIntervalSeconds` is 60. If not, set it to 60. Add/update a comment to clarify its purpose (e.g., "# How often to check user activity (in seconds).").
- [✓] Step ALIGN-2: In report.ps1, confirm the default for `$reportIntervalMinutes` is 1. If not, set it to 1. Add/update a comment to clarify its purpose and how it enables minute-by-minute "working" state reporting (e.g., "# How often to send a status update if state hasn't changed (in minutes). With a 1-minute poll interval, this ensures 'working' state is reported every minute. State changes are reported immediately.").
- [✓] Step ALIGN-3: In report.ps1, change the default value of `$IdleThresholdMinutes` from 15 to 5. This aligns with `project_config.md` ("Idle Detection Logic: Fixed inactivity threshold of 5 minutes") and `README.md` examples. Add/update a comment for this variable (e.g., "# User idle time in minutes before state changes to 'stopped'.").
- [✓] Step ALIGN-4: In README.md, update the client-side `config.env` example under "Installation Instructions" -> "Client-Side Setup" -> "Configure" to include `REPORT_INTERVAL_MINUTES="1"`. Ensure `IDLE_THRESHOLD_MINUTES` is shown as "5" and `POLL_INTERVAL_SECONDS` as "60".
  - The `config.env` block in `README.md` should be updated to:

    ```
    API_ENDPOINT="http://your-server-address:5000/api/report"
    IDLE_THRESHOLD_MINUTES="5"
    POLL_INTERVAL_SECONDS="60"
    REPORT_INTERVAL_MINUTES="1"
    ```
### Task: AdvancedRefactoringAndDocumentation

Refactor exception handling, improve code clarity and naming, and create a comprehensive README.md.

Phase 1: Code Refactoring

- Sub-Task 1: Refine Exception Handling (REH) (see the sketch after this sub-task)
  - [✓] Step REH-1: Analyze `app/api/events.py`. Replace generic `except Exception` blocks with specific exception types (e.g., `SQLAlchemyError`, `ValueError`, `TypeError`). Ensure error responses are informative.
  - [✓] Step REH-2: Analyze `app/api/reports.py`. Apply similar specific exception handling. Pay attention to potential errors during database queries or data processing.
  - [✓] Step REH-3: Analyze `app/utils/queries.py` and `app/utils/formatting.py`. Ensure any potential errors are handled gracefully or documented.
  - [✓] Step REH-4: Analyze `run.py` and `app/__init__.py`. Review exception handling during app initialization and configuration.
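A minimal sketch of the pattern REH-1 calls for, replacing a generic `except Exception` with specific handlers in a report-ingestion route; the blueprint name, `WorkEvent` constructor fields, and route path are assumptions for illustration, not the actual contents of `app/api/events.py`.

```python
from flask import Blueprint, jsonify, request
from sqlalchemy.exc import SQLAlchemyError

from app import db                # assumed Flask-SQLAlchemy instance
from app.models import WorkEvent  # model referenced elsewhere in this plan

events_bp = Blueprint("events", __name__)  # assumed blueprint name

@events_bp.route("/api/report", methods=["POST"])
def receive_report():
    """Store a client report, with specific handlers instead of a bare except Exception."""
    try:
        payload = request.get_json()
        event = WorkEvent(username=payload["username"], state=payload["state"])
        db.session.add(event)
        db.session.commit()
    except (KeyError, TypeError, ValueError) as exc:
        # Malformed or incomplete client payload: answer 400, not a generic 500.
        return jsonify({"error": f"invalid report payload: {exc}"}), 400
    except SQLAlchemyError:
        db.session.rollback()  # leave the session usable after a DB failure
        return jsonify({"error": "database error while storing event"}), 500
    return jsonify({"status": "ok"}), 201
```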
- Sub-Task 2: Enhance Code Clarity and Naming (ECN) (concurrent with Sub-Task 1)
  - [✓] Step ECN-1: Review `app/api/events.py` for clarity, naming, docstrings, and comments.
  - [✓] Step ECN-2: Review `app/api/reports.py` for clarity, naming, docstrings, and comments.
  - [✓] Step ECN-3: Review `app/utils/queries.py` for clarity, naming, docstrings, and comments.
  - [✓] Step ECN-4: Review `app/utils/formatting.py` for clarity, naming, docstrings, and comments.
  - [✓] Step ECN-5: Review `app/views/dashboard.py` for clarity, naming, docstrings, and comments.
  - [✓] Step ECN-6: Review `app/models.py` for clarity, naming, docstrings, and comments.
  - [✓] Step ECN-7: Review `run.py` and `app/__init__.py` for clarity, naming, docstrings, and comments.
  - [✓] Step ECN-8: Briefly review `static/js/` submodules for major clarity/naming issues.
Phase 2: README Creation

- Sub-Task 3: Create Comprehensive README.md (DOC)
  - [✓] Step DOC-1: Delete the existing README.md file.
  - [✓] Step DOC-2: Create a new README.md file.
  - [✓] Step DOC-3: Draft "Project Title" and "Brief Project Description".
  - [✓] Step DOC-4: Draft "Installation Instructions" (Client and Server).
  - [✓] Step DOC-5: Draft "Usage Examples" (API interaction, dashboard access).
  - [✓] Step DOC-6: Draft "Structure of the Project".
  - [✓] Step DOC-7: Draft "Dependencies and Requirements".
  - [✓] Step DOC-8: Draft "Contributing Guidelines".
  - [✓] Step DOC-9: Draft "License Information".
  - [✓] Step DOC-10: Review and refine the complete README.md.
### Task: ModifyReportingDashboard (Completed Task)

Modify the reporting and dashboard to show aggregated active working time duration in simple tables.

- [✓] Step Mod-1: Define new SQL queries in app.py to calculate daily, weekly, and monthly working durations per user using LEAD() and JULIANDAY(), aggregating with SUM() (see the query sketch after this task).
- [✓] Step Mod-2: Update Flask endpoints in app.py:
  - [✓] Step Mod-2.1: Modify `/api/reports/daily` to use the new daily duration query.
  - [✓] Step Mod-2.2: Create `/api/reports/weekly` using a new weekly duration query.
  - [✓] Step Mod-2.3: Create `/api/reports/monthly` using a new monthly duration query.
  - [✓] Step Mod-2.4: Ensure endpoints return JSON data formatted for table display (e.g., a list of dicts with user, period, duration_hours).
- [✓] Step Mod-3: Update `templates/dashboard.html`:
  - [✓] Step Mod-3.1: Remove Chart.js script inclusion and chart-related HTML elements.
  - [✓] Step Mod-3.2: Add JavaScript to fetch data from the new/updated API endpoints.
  - [✓] Step Mod-3.3: Create HTML tables to display the fetched duration data (User, Period, Duration).
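A minimal sketch of the kind of daily-duration query Mod-1 describes, pairing `LEAD()` with `JULIANDAY()` in SQLite (window functions require SQLite 3.25+); the `events` table and its column names are assumptions about the schema app.py used at the time, not the recorded query.

```python
import sqlite3

# Hypothetical schema: events(username, state, timestamp). For each event, LEAD()
# finds the next event's timestamp per user; the JULIANDAY difference (in days)
# times 24 gives the interval length in hours, summed over 'working' intervals.
DAILY_DURATION_SQL = """
WITH ordered AS (
    SELECT
        username,
        state,
        timestamp,
        LEAD(timestamp) OVER (
            PARTITION BY username ORDER BY timestamp
        ) AS next_timestamp
    FROM events
)
SELECT
    username,
    DATE(timestamp) AS day,
    ROUND(SUM((JULIANDAY(next_timestamp) - JULIANDAY(timestamp)) * 24), 2)
        AS duration_hours
FROM ordered
WHERE state = 'working' AND next_timestamp IS NOT NULL
GROUP BY username, DATE(timestamp);
"""

def fetch_daily_durations(db_path: str):
    """Return one dict per (user, day) with the summed working hours."""
    with sqlite3.connect(db_path) as conn:
        conn.row_factory = sqlite3.Row
        return [dict(row) for row in conn.execute(DAILY_DURATION_SQL)]
```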
### Task: AddExtensiveLogging (Completed Task)

Add extensive file-based logging to both the Flask server and the PowerShell client.

- [✓] Step Log-1: Configure Flask Logging (app.py):
  - [✓] Step Log-1.1: Import `logging` and `logging.handlers`.
  - [✓] Step Log-1.2: In `create_app` or equivalent setup location: Configure a `RotatingFileHandler` to write to
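The entry above is cut off, so the destination path is unknown; below is a minimal sketch of the kind of `RotatingFileHandler` setup Log-1.2 describes, with `logs/server.log`, the rotation size, and the backup count all placeholder assumptions.

```python
import logging
import os
from logging.handlers import RotatingFileHandler

def configure_logging(app):
    """Attach a rotating file handler to the Flask app logger (sketch)."""
    os.makedirs("logs", exist_ok=True)
    handler = RotatingFileHandler(
        "logs/server.log",   # placeholder path; the plan text is truncated here
        maxBytes=1_000_000,  # assumed: rotate at roughly 1 MB
        backupCount=5,       # assumed: keep five rotated files
    )
    handler.setFormatter(logging.Formatter(
        "%(asctime)s %(levelname)s [%(module)s] %(message)s"
    ))
    handler.setLevel(logging.INFO)
    app.logger.addHandler(handler)
    app.logger.setLevel(logging.INFO)
```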