first commit

commit d45adecd1c
ilia.gurielidze 2025-05-16 17:55:30 +04:00
44 changed files with 7004 additions and 0 deletions

.cursorignore Normal file

@@ -0,0 +1,7 @@
venv/
__pycache__/
*.pyc
*.pyo
*.pyd
*.pyw
*.pyz

.gitignore vendored Normal file

@@ -0,0 +1,6 @@
venv/
__pycache__/
*.pyc
*.log
*.env
instance/

README.md Normal file

@@ -0,0 +1,427 @@
# Employee Workstation Activity Tracking System
A system for logging user activity (login events, active/inactive periods) on Windows workstations and reporting these events to a central Flask server. The server provides a web dashboard for viewing daily, weekly, and monthly aggregated working time summaries.
## Installation Instructions
### Client-Side Setup (Windows Workstations)
**Prerequisites:**
* Windows 10 or Windows 11
* PowerShell 3.0 or higher
* Administrative privileges (for setting up the scheduled task)
**Steps:**
1. **Copy Script:** Transfer the `report.ps1` script to a suitable location on the client workstation (e.g., `C:\Scripts\UserActivityTracker\`).
    * The `report.ps1` script and `schedule_task.ps1` can be found in the `client_tools/` directory of the project.
2. **Configure:**
    * Create a `config.env` file in the same directory as `report.ps1` with the following content:
      ```env
      API_ENDPOINT="http://your-server-address:5050/api/report"
      IDLE_THRESHOLD_MINUTES="10"
      POLL_INTERVAL_SECONDS="60"
      REPORT_INTERVAL_MINUTES="1"
      ```
      Replace `http://your-server-address:5050/api/report` with the actual URL of your server's reporting endpoint.
    * Alternatively, these values can be set as system-wide environment variables on the workstation.
3. **Schedule Task:**
    * Copy the `schedule_task.ps1` script (from the `client_tools/` directory) to the workstation.
    * Open PowerShell as an Administrator.
    * Navigate to the directory where you saved `schedule_task.ps1`.
    * Run the script, providing the path to `report.ps1`:
      ```powershell
      .\schedule_task.ps1 -ScriptPath "C:\Scripts\UserActivityTracker\report.ps1"
      ```
      This will create a scheduled task that runs `report.ps1` when any user logs on.
### Server-Side Setup
**Prerequisites:**
* Python 3.9 or higher
* `pip` (Python package installer)
* A database system: SQLite (for development/testing) or PostgreSQL (recommended for production).
* Git (for cloning the repository)
**Steps:**
1. **Clone Repository:**
    ```bash
    git clone <repository_url>  # Replace <repository_url> with the actual URL
    cd employee-workstation-activity-tracking  # Or your project directory name
    ```
2. **Create Virtual Environment:**
    ```bash
    python -m venv venv
    ```
3. **Activate Virtual Environment:**
    * On Windows:
      ```cmd
      .\venv\Scripts\activate
      ```
    * On macOS/Linux:
      ```bash
      source venv/bin/activate
      ```
4. **Install Dependencies:**
    ```bash
    pip install -r requirements.txt
    ```
5. **Configure Application:**
    * Create a `config.env` file in the project root directory (e.g., alongside `run.py`).
    * Add the following necessary configurations:
      ```env
      # Flask Secret Key (change this to a random string for production)
      SECRET_KEY="your_very_secret_flask_key"

      # Database URI
      # For SQLite (creates a file in the 'instance' folder, good for development):
      DATABASE_URI="sqlite:///../instance/work_events.db"
      # For PostgreSQL (replace with your actual connection details):
      # DATABASE_URI="postgresql://username:password@host:port/database_name"

      # Optional: Define HOST, PORT, DEBUG for Flask development server
      # HOST="0.0.0.0"
      # PORT="5050"
      # DEBUG="True"
      ```
    * **Important:**
        * Replace `your_very_secret_flask_key` with a strong, random string for production.
        * Choose and configure either SQLite or PostgreSQL for `DATABASE_URI`.
        * The `instance` folder will be created automatically by the application if it doesn't exist.
6. **Initialize Database:**
    * Ensure your `config.env` is correctly set up with the `DATABASE_URI`.
    * The application is designed to create the database and all necessary tables (including `work_events` and `user_real_work_summary`) if they don't exist when it first runs. For explicit control, use the Flask CLI command:
      ```bash
      flask init-db
      ```
      Run this command from the project root directory with the virtual environment activated. It ensures all tables defined in `app/models.py` are created in the database.
    * Alternatively, if you need to set up or inspect the database schema manually (e.g., for PostgreSQL), SQL script files are provided in the `database_utils/` directory:
        * `database_utils/create_db.sql` (original schema; may need review if used directly against current models)
        * `database_utils/001_create_user_real_work_summary.sql` (for the real work hours summary table)

      Example for PostgreSQL:
      ```bash
      psql -U your_pg_user -d your_pg_database -f database_utils/your_script_file.sql
      ```
      Using `flask init-db` is generally recommended for consistency with the application's models.
## Usage Examples
### Running the Server
Ensure your virtual environment is activated before running the server.
Use the `start_app.sh` script located in the project root:
```bash
./start_app.sh [mode]
```
* **`[mode]`** is optional.
    * If omitted, or set to `dev` or `development`, the server starts in development mode (using `python run.py`).
    * If set to `prod` or `production`, the server starts in production mode (using Gunicorn).
**Example for Development:**
```bash
./start_app.sh dev
# or simply
./start_app.sh
```
**Example for Production:**
```bash
./start_app.sh prod
```
The application will typically start on `http://0.0.0.0:5050/` or `http://127.0.0.1:5050/` by default. Check the terminal output for the exact address.
The script will attempt to activate the `venv` virtual environment.
### Accessing the Dashboard
Once the server is running, open a web browser and navigate to the server's address (e.g., `http://your-server-ip:5050/` or `http://localhost:5050/` if running locally).
The dashboard provides views for daily, weekly, and monthly user activity summaries. It now displays "Real Work Hours," which are calculated based on continuous 40-minute blocks of 'working' activity reported by the client.
### Client-Side Activity Reporting
The `report.ps1` script, once configured and scheduled on a Windows workstation, runs automatically at user logon. It monitors user idle time based on the `IDLE_THRESHOLD_MINUTES` setting (default 10 minutes) and polls at intervals defined by `POLL_INTERVAL_SECONDS` (default 60 seconds).
When a user's state changes (e.g., from active to idle/stopped, or idle to active/working), the script sends an HTTP POST request to the server's `/api/report` endpoint.
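The client's report-on-transition behavior can be sketched as a simple polling state machine. The sketch below is Python for illustration only (the real client is PowerShell); `get_idle_seconds` and `send_report` are hypothetical callbacks standing in for the Win32 idle-time query and the HTTP POST.

```python
import time
from datetime import datetime, timezone

IDLE_THRESHOLD_SECONDS = 10 * 60  # mirrors IDLE_THRESHOLD_MINUTES=10
POLL_INTERVAL_SECONDS = 60        # mirrors POLL_INTERVAL_SECONDS=60

def classify_state(idle_seconds, threshold=IDLE_THRESHOLD_SECONDS):
    """Map the current idle time to the reported state."""
    return "stopped" if idle_seconds >= threshold else "working"

def run_tracker(get_idle_seconds, send_report):
    """Poll idle time and send a report only when the state changes."""
    last_state = None
    while True:
        state = classify_state(get_idle_seconds())
        if state != last_state:
            # UTC ISO 8601 timestamp with 'Z' suffix, as the server expects
            ts = datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:%M:%SZ")
            send_report({"user": "domain\\username", "state": state, "ts": ts})
            last_state = state
        time.sleep(POLL_INTERVAL_SECONDS)
```

Reporting only on transitions (rather than every poll) keeps server traffic proportional to activity changes, not to the polling rate.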
**Example API Payload Sent by Client:**
```json
{
"user": "domain\\username",
"state": "working",
"ts": "2024-05-10T10:20:30Z"
}
```
* `user`: The Windows username (often including domain).
* `state`: Either `"working"` or `"stopped"`.
* `ts`: An optional ISO 8601 UTC timestamp. If not provided by the client, the server records the event with its current UTC time.
### Manual API Interaction (for testing)
You can also send POST requests to the `/api/report` endpoint using tools like `curl` or Postman for testing purposes.
**Example using `curl` for raw event reporting:**
```bash
curl -X POST -H "Content-Type: application/json" \
-d '{"user":"testuser","state":"working","ts":"2024-01-01T12:00:00Z"}' \
http://localhost:5050/api/report
```
Successful requests will receive a JSON response like `{"success": true}` with an HTTP 201 status code.
**New API Endpoint for Real Work Hours Data:**
A new endpoint `/api/reports/real_work_hours` is available to fetch the calculated "Real Work Hours".
* **Method:** `GET`
* **Query Parameters:**
    * `username` (string, optional): Filter results by a specific username.
    * `start_date` (string, optional): Start date for the report period (YYYY-MM-DD).
    * `end_date` (string, optional): End date for the report period (YYYY-MM-DD). If `start_date` is provided and `end_date` is omitted, `end_date` defaults to `start_date`.
* **Example `curl`:**
  ```bash
  curl -X GET "http://localhost:5050/api/reports/real_work_hours?username=testuser&start_date=2024-01-01&end_date=2024-01-31"
  ```
* **Example Response** (the `data` array may contain multiple records):
  ```json
  {
      "success": true,
      "data": [
          {
              "id": 1,
              "username": "testuser",
              "work_date": "2024-01-15",
              "real_hours_counted": 3,
              "last_processed_event_id": 12345
          }
      ]
  }
  ```
## Project Structure
The project is organized as follows:
```text
work-tracing/
├── app/                      # Main Flask application package
│   ├── __init__.py           # Application factory (create_app), blueprint registration
│   ├── api/                  # REST API Blueprints and modules
│   │   ├── __init__.py
│   │   ├── events.py         # Endpoints for event reporting
│   │   └── reports.py        # Endpoints for data retrieval
│   ├── models.py             # SQLAlchemy database models (WorkEvent, UserRealWorkSummary)
│   ├── utils/                # Utility modules (query building, formatting)
│   │   ├── __init__.py
│   │   ├── formatting.py
│   │   └── queries.py
│   ├── views/                # Web page view Blueprints
│   │   ├── __init__.py
│   │   └── dashboard.py
│   ├── errors.py             # Custom error handler registration
│   ├── scheduler.py          # APScheduler setup and job definitions
│   ├── services/             # Service layer for business logic (e.g., work_hours_service.py)
│   └── cli.py                # Custom Flask CLI commands (e.g., process-real-hours)
├── client_tools/             # Scripts and utilities for client-side setup
│   ├── report.ps1            # PowerShell script for client activity reporting
│   ├── schedule_task.ps1     # PowerShell script to schedule the client agent
│   └── run_hidden.vbs        # VBScript to run the PowerShell script hidden (optional)
├── database_utils/           # Database-related utility scripts
│   ├── create_db.sql         # Manual SQL schema creation script (reference for work_events)
│   └── 001_create_user_real_work_summary.sql  # Manual SQL for user_real_work_summary table
├── instance/                 # Instance-specific data (SQLite DB, logs), .gitignored
├── static/                   # Static files (CSS, JavaScript)
│   ├── css/
│   │   └── dashboard.css
│   └── js/                   # JavaScript modules for the dashboard
│       ├── dashboard.js
│       └── ... (other .js files)
├── templates/                # HTML templates
│   └── dashboard.html
├── venv/                     # Python virtual environment
├── .cursorignore
├── .flake8
├── .gitignore
├── config.env                # Server-side environment configuration; .gitignore it if it contains secrets
├── ecosystem.config.js       # PM2 configuration (if used for deployment)
├── README.md                 # This file
├── requirements.txt          # Python dependencies
├── run.py                    # Application entry point (used by start_app.sh for dev mode)
└── start_app.sh              # Primary script to start the application (dev/prod)
```
## Time Synchronization
The application maintains consistent time handling across all components:
1. **Client-side (PowerShell):**
    * All timestamps are generated in UTC using `(Get-Date).ToUniversalTime().ToString("o")`.
    * Timestamps are sent to the server in ISO 8601 format with a 'Z' suffix indicating UTC.
2. **Backend (Flask):**
    * All incoming timestamps are converted to UTC if they carry timezone information.
    * Backend code uses `datetime.utcnow()` for server-generated timestamps.
    * All time-based calculations are performed in UTC.
3. **Database (PostgreSQL/SQLite):**
    * For PostgreSQL, the database session timezone is set to 'Asia/Dubai' (UTC+4).
    * Timestamps are automatically converted from UTC to Asia/Dubai when stored.
    * When retrieved, timestamps reflect the Asia/Dubai timezone.
4. **Frontend (JavaScript):**
    * All displayed times use GMT+4 (Asia/Dubai) via the `formatTimeToGMT4()` function.
    * Date selection for filtering is in the user's local timezone.
    * The auto-refresh interval (60 seconds) aligns with the client reporting frequency.
This approach ensures that all timestamps are consistently handled throughout the application, preventing time drift or synchronization issues between components.
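The UTC-to-GMT+4 handling described above can be sketched in Python (illustrative only; the dashboard's actual display conversion happens in JavaScript via `formatTimeToGMT4()`):

```python
from datetime import datetime, timedelta, timezone

# Fixed GMT+4 offset used for display (Asia/Dubai observes no DST)
GMT4 = timezone(timedelta(hours=4))

def parse_client_ts(ts: str) -> datetime:
    """Parse an ISO 8601 timestamp with a trailing 'Z' into an aware UTC datetime."""
    return datetime.fromisoformat(ts.replace('Z', '+00:00'))

def to_gmt4(dt_utc: datetime) -> str:
    """Convert an aware UTC datetime to the GMT+4 display string."""
    return dt_utc.astimezone(GMT4).strftime('%Y-%m-%d %H:%M:%S')

# A client event reported at 10:20:30 UTC displays as 14:20:30 GMT+4
print(to_gmt4(parse_client_ts("2024-05-10T10:20:30Z")))  # 2024-05-10 14:20:30
```

Keeping all storage and arithmetic in UTC, and converting to GMT+4 only at the display boundary, is what prevents drift between components.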
## Dependencies and Requirements
### Client-Side Requirements (Windows Workstations)
* **Operating System:** Windows 10 or Windows 11
* **PowerShell:** Version 3.0 or higher
* **Permissions:** Administrative privileges are required to set up the scheduled task for the client script (`report.ps1`).
### Server-Side Requirements
* **Python:** Version 3.9 or higher.
* **Database:**
* SQLite 3 for development or small-scale deployments.
* PostgreSQL (version 12+ recommended) for production or larger deployments.
* **Networking:** Clients must be able to reach the server over HTTP on the configured port (default 5050) within the local area network (LAN).
* **Python Packages:** The specific Python dependencies are listed in `requirements.txt`. Key packages include:
* Flask (web framework)
* Flask-SQLAlchemy (SQLAlchemy integration for Flask)
* SQLAlchemy (SQL toolkit and Object Relational Mapper)
* Gunicorn (WSGI HTTP server for production on Unix-like systems)
* psycopg2-binary (PostgreSQL adapter for Python, if using PostgreSQL)
* python-dotenv (for managing environment variables from `.env` files)
* requests (for making HTTP requests, used internally by the server)
* APScheduler (for scheduling background tasks, like calculating real work hours)
**Full `requirements.txt`:**
```
Flask==2.3.3
Flask-SQLAlchemy==3.1.1
SQLAlchemy==2.0.20
gunicorn==21.2.0
python-dotenv==1.0.0
psycopg2-binary==2.9.7
pytest==7.4.0
black==23.7.0
flake8==6.1.0
isort==5.12.0
alembic==1.12.0
Werkzeug==2.3.7
Jinja2==3.1.2
itsdangerous==2.1.2
click==8.1.7
requests
APScheduler==3.10.4
```
Installation of these server-side dependencies is typically done by running `pip install -r requirements.txt` within an activated virtual environment.
## Automated Tasks and Manual Operations
### Real Work Hours Calculation
The system includes a new table, `user_real_work_summary`, which stores "Real Work Hours." A "Real Work Hour" is defined as a continuous block of 40 minutes where the user's status is reported as 'working' by the client-side script.
* **Automated Processing:**
    * An automated background task, managed by `APScheduler` within the Flask application, runs periodically (e.g., every 15 minutes).
    * This task (`calculate_and_store_real_work_hours`, located in `app/services/work_hours_service.py`) processes new events from the `work_events` table.
    * It identifies 40-minute 'working' blocks and updates `real_hours_counted` and `last_processed_event_id` in the `user_real_work_summary` table for each user and date.
* **Manual Trigger / Backfilling:**
    * A Flask CLI command is available for manual operations:
      ```bash
      flask process-real-hours
      ```
    * This command executes the same `calculate_and_store_real_work_hours` function.
    * It is useful for backfilling a large history of `work_events` recorded before this feature was added, for testing the calculation logic, or for ad-hoc processing if the scheduler was temporarily inactive.
## Contributing Guidelines
We welcome contributions to improve the Employee Workstation Activity Tracking System! To ensure a smooth process, please follow these guidelines.
### Getting Started
1. **Fork the repository** on GitHub (or your Git hosting platform).
2. **Clone your fork** locally: `git clone <your-fork-url>`
3. **Create a feature branch** for your changes: `git checkout -b feat/your-feature-name` or `fix/your-bug-fix`.
### Coding Standards
Consistency helps maintain code quality and readability.
* **Python:**
    * Adhere to **PEP 8** style guidelines.
    * Use **Black** for code formatting. It's recommended to configure your editor to format on save, or to run it before committing.
    * Use **isort** for organizing imports.
    * Use **Flake8** for linting to catch common errors and style issues. Flake8 configuration lives in the `.flake8` file in the project root (e.g., `max-line-length = 88` to align with Black).
    * **Before committing Python code, please run these tools from the project root directory:**
      ```bash
      # Ensure your virtual environment (e.g., venv) is activated
      black .
      isort .
      flake8 .
      ```
      Address any critical issues reported by Flake8.
* **PowerShell:**
    * Use Verb-Noun function and cmdlet names (e.g., `Get-UserIdleTime`, `Send-ActivityReport`).
    * Maintain consistent two-space indentation.
* **JavaScript:**
    * Use modern ES6+ syntax.
    * Write modular code, leveraging ES6 modules.
    * Avoid polluting the global scope.
### Commit Messages
Please follow the **Conventional Commits** specification for your commit messages. This helps in generating automated changelogs and makes the project history more understandable.
Examples:
* `feat: add detailed user activity log endpoint`
* `fix: correct calculation for weekly report aggregation`
* `docs: update installation instructions for PostgreSQL`
* `style: apply Black formatting to all Python files`
* `refactor: improve exception handling in API modules`
* `test: add unit tests for date utility functions`
* `chore: update Gunicorn version in requirements.txt`
### Testing
* If you add new features, please include tests where applicable. This project uses `pytest` for Python testing.
* Ensure all existing tests pass before submitting a pull request.
```bash
# To run tests (ensure pytest is installed and virtual env is active):
pytest
```
### Pull Requests
1. Once your changes are complete and tested, commit them with a clear Conventional Commit message.
2. Push your feature branch to your fork: `git push origin feat/your-feature-name`.
3. Open a **Pull Request (PR)** to the `main` branch of the original repository.
4. In your PR description, clearly explain the changes you've made, why they were made, and reference any related issues (e.g., "Closes #123").
5. Be prepared to discuss your changes and make adjustments if requested during the review process.
Thank you for contributing!
## License
This project is licensed under the MIT License. A copy of the license should be included as a `LICENSE` file in the project's root directory.
If a `LICENSE` file is not present, the terms of the MIT License can be found at: [https://opensource.org/licenses/MIT](https://opensource.org/licenses/MIT)

app.py Normal file

@@ -0,0 +1,959 @@
"""
Employee Workstation Activity Tracking - Flask API Server
This Flask application provides a REST API for receiving and storing user activity
events from client workstations. It exposes endpoints for reporting activity state
changes and retrieving aggregated reports.
"""
import os
import logging
from logging.handlers import RotatingFileHandler
from datetime import datetime, timedelta
from dotenv import load_dotenv # Import load_dotenv
from flask import Flask, request, jsonify, render_template
from flask_sqlalchemy import SQLAlchemy
from sqlalchemy.exc import SQLAlchemyError
from sqlalchemy import text, func, case, cast, Integer
import sys
# Load environment variables from config.env file
load_dotenv()  # Try .env first
config_env_path = os.path.join(os.path.dirname(__file__), 'config.env')
if os.path.exists(config_env_path):
    load_dotenv(config_env_path)
    print(f"Loaded environment variables from {config_env_path}")
else:
    print(f"Warning: config.env file not found at {config_env_path}")

# Print all DATABASE* environment variables for debugging (credentials masked)
for key, value in os.environ.items():
    if key.startswith("DATABASE"):
        masked_value = value
        if "@" in value:
            parts = value.split("@")
            masked_value = "****@" + parts[1]
        print(f"{key}: {masked_value}")
# Initialize Flask app
app = Flask(__name__, instance_relative_config=True)

# Load configuration
# In production, use environment variables or a .env file
app.config.from_mapping(
    SECRET_KEY=os.environ.get('SECRET_KEY', 'dev'),
    SQLALCHEMY_DATABASE_URI=os.environ.get('DATABASE_URI', os.environ.get('DATABASE_URL')),
    SQLALCHEMY_TRACK_MODIFICATIONS=False
)

# Ensure DATABASE_URI is set
if not app.config['SQLALCHEMY_DATABASE_URI']:
    raise ValueError("DATABASE_URI or DATABASE_URL environment variable must be set for production mode")

# Ensure the instance folder exists
try:
    os.makedirs(app.instance_path)
except OSError:
    pass

# Configure logging
log_formatter = logging.Formatter('%(asctime)s - %(levelname)s - %(message)s')
log_handler = RotatingFileHandler(
    os.path.join(app.instance_path, 'server.log'),
    maxBytes=1024 * 1024 * 5,  # 5 MB
    backupCount=5
)
log_handler.setFormatter(log_formatter)
log_handler.setLevel(logging.INFO)
if not app.debug:  # Avoid duplicate logs in debug mode if the Werkzeug logger is also active
    app.logger.addHandler(log_handler)
    app.logger.setLevel(logging.INFO)
app.logger.info('Flask application starting up...')  # Log startup
# Initialize database
db = SQLAlchemy()
db.init_app(app)


# Define database models
class WorkEvent(db.Model):
    """
    Represents a user activity event with state transitions (working/stopped).
    """
    __tablename__ = 'work_events'

    id = db.Column(db.Integer, primary_key=True)
    user = db.Column(db.String(100), nullable=False, index=True)
    state = db.Column(db.String(10), nullable=False)  # 'working' or 'stopped'
    ts = db.Column(db.DateTime, nullable=False,
                   server_default=db.func.current_timestamp(),
                   index=True)

    def __repr__(self):
        return f"<WorkEvent(user='{self.user}', state='{self.state}', ts='{self.ts}')>"

    def to_dict(self):
        """Convert model to dictionary for API responses"""
        return {
            'id': self.id,
            'user': self.user,
            'state': self.state,
            'ts': self.ts.isoformat() if self.ts else None
        }


# Print model metadata for debugging
app.logger.info(f"WorkEvent __tablename__: {WorkEvent.__tablename__}")
app.logger.info(f"WorkEvent columns: {[column.name for column in WorkEvent.__table__.columns]}")
try:
    # Check if there are any attribute mappers or event listeners
    app.logger.info(f"WorkEvent attribute names: {dir(WorkEvent)}")
except Exception as e:
    app.logger.error(f"Error inspecting WorkEvent class: {str(e)}")
# API Routes
@app.route('/api/report', methods=['POST'])
def report_event():
    """
    Endpoint for clients to report activity state changes.

    Expected JSON payload:
    {
        "user": "username",
        "state": "working|stopped",
        "ts": "2023-07-08T12:30:45Z" (optional, ISO 8601)
    }
    """
    data = request.get_json()
    app.logger.info(f"Received report request: {data}")  # Log request
    if not data or 'user' not in data or 'state' not in data:
        app.logger.warning("Invalid report request payload.")
        return jsonify({
            'success': False,
            'message': 'Missing required fields: user, state'
        }), 400

    # Validate state value
    if data['state'] not in ['working', 'stopped']:
        return jsonify({
            'success': False,
            'message': 'Invalid state value. Must be "working" or "stopped"'
        }), 400

    # Parse the optional timestamp; default to the current UTC time
    user = data['user']
    state = data['state']
    event_ts = datetime.utcnow()
    ts_str = data.get('ts')
    if ts_str:
        try:
            # Accept ISO 8601 with a trailing 'Z' for UTC
            event_ts = datetime.fromisoformat(ts_str.replace('Z', '+00:00'))
        except ValueError:
            return jsonify({
                'success': False,
                'message': 'Invalid timestamp format. Use ISO 8601 (YYYY-MM-DDTHH:MM:SSZ)'
            }), 400

    # Create and store the event
    new_event = WorkEvent(user=user, state=state, ts=event_ts)
    try:
        app.logger.info(f"Attempting to add event to database: User={user}, State={state}, TS={event_ts}")
        db.session.add(new_event)
        db.session.commit()
        app.logger.info(f"Successfully recorded event: User={user}, State={state}")
        return jsonify({"success": True}), 201
    except SQLAlchemyError as e:
        db.session.rollback()
        app.logger.error(f"Database error while recording event: {e}")
        return jsonify({"success": False, "message": "Database error"}), 500
    except Exception as e:
        app.logger.error(f"Unexpected error processing report request: {e}")
        return jsonify({"success": False, "message": "Internal server error"}), 500
# --- Helper Functions for Duration Calculation ---
def calculate_duration_sql(time_period):
    """
    Generates the core SQL query to calculate working durations.
    Uses the LEAD() window function to pair each 'working' event with the next event.
    Calculates duration in hours using PostgreSQL functions.
    """
    # PostgreSQL date functions (already compatible with the database)
    period_grouping = {
        'daily': "DATE_TRUNC('day', start_time)",
        'weekly': "DATE_TRUNC('week', start_time)",  # PostgreSQL week starts Monday
        'monthly': "DATE_TRUNC('month', start_time)"
    }.get(time_period, "DATE_TRUNC('day', start_time)")  # Default to daily if invalid

    # Automatically consider a user not working after 15 minutes of inactivity
    auto_timeout_seconds = 15 * 60  # 15 minutes in seconds

    # Use the public schema explicitly; ensure proper aggregation by user and period
    sql_query = f"""
    WITH AllEvents AS (
        SELECT
            "user",
            ts,
            state
        FROM public.work_events
        UNION ALL
        -- Add virtual "stopped" events for users who have been inactive for
        -- longer than the timeout but don't have a corresponding "stopped" event
        SELECT
            w."user",
            w.ts + INTERVAL '{auto_timeout_seconds} seconds' AS ts,
            'stopped' AS state
        FROM public.work_events w
        WHERE w.state = 'working'
        AND NOT EXISTS (
            SELECT 1
            FROM public.work_events w2
            WHERE w2."user" = w."user"
            AND w2.ts > w.ts
            AND w2.ts <= w.ts + INTERVAL '{auto_timeout_seconds} seconds'
        )
        -- Avoid duplicating existing stop events
        AND NOT EXISTS (
            SELECT 1
            FROM public.work_events w3
            WHERE w3."user" = w."user"
            AND w3.state = 'stopped'
            AND w3.ts = w.ts + INTERVAL '{auto_timeout_seconds} seconds'
        )
    ),
    EventPairs AS (
        SELECT
            "user",
            ts AS start_time,
            state,
            LEAD(ts) OVER (PARTITION BY "user" ORDER BY ts) AS next_event_time,
            LEAD(state) OVER (PARTITION BY "user" ORDER BY ts) AS next_event_state
        FROM AllEvents
    ),
    TimeoutAdjusted AS (
        SELECT
            "user",
            start_time,
            state,
            CASE
                -- If the next event is beyond the timeout, cap the duration at the timeout
                WHEN state = 'working' AND next_event_time IS NOT NULL
                    AND EXTRACT(EPOCH FROM (next_event_time - start_time)) > {auto_timeout_seconds}
                THEN start_time + INTERVAL '{auto_timeout_seconds} seconds'
                ELSE next_event_time
            END AS adjusted_end_time,
            next_event_state
        FROM EventPairs
    ),
    CalculatedDurations AS (
        SELECT
            "user",
            {period_grouping} AS period_start,
            SUM(
                CASE
                    WHEN state = 'working' AND adjusted_end_time IS NOT NULL THEN
                        EXTRACT(EPOCH FROM (adjusted_end_time - start_time)) / 3600.0
                    ELSE 0  -- Ignore intervals starting with 'stopped' or without a following event
                END
            ) AS total_hours,
            MIN(CASE WHEN state = 'working' THEN start_time END) AS first_login_time
        FROM TimeoutAdjusted
        WHERE state = 'working'  -- Only consider intervals that start with 'working'
        GROUP BY "user", period_start
    )
    -- Final aggregation to ensure one row per user per period
    SELECT
        "user",
        period_start,
        SUM(total_hours) AS total_hours,
        MIN(first_login_time) AS first_login_time
    FROM CalculatedDurations
    GROUP BY "user", period_start
    ORDER BY "user", period_start DESC;
    """
    return sql_query
def filter_sql_by_user(base_sql, user):
    """Applies a user filter to the SQL query safely."""
    # Find the position of the final query part
    final_select_pos = base_sql.rfind("SELECT")
    if final_select_pos == -1:
        return base_sql  # Can't find final SELECT, return unchanged

    # Find the position of "FROM CalculatedDurations"
    from_pos = base_sql.find("FROM CalculatedDurations", final_select_pos)
    if from_pos == -1:
        return base_sql  # Can't find FROM CalculatedDurations, return unchanged

    # Find the next line after FROM
    next_line_pos = base_sql.find("\n", from_pos)
    if next_line_pos == -1:
        return base_sql  # No newline after FROM, return unchanged

    # Locate the GROUP BY clause that follows
    group_by_pos = base_sql.find("GROUP BY", from_pos)
    if group_by_pos == -1:
        return base_sql  # No GROUP BY found, return unchanged

    # Insert the user filter, using LIKE for partial matching (case insensitive)
    where_clause = " WHERE LOWER(\"user\") LIKE LOWER(:user || '%')\n"
    return base_sql[:next_line_pos + 1] + where_clause + base_sql[group_by_pos:]
def fetch_duration_report(time_period, user_filter=None):
    """Fetches duration report data from the database."""
    app.logger.debug(f"Fetching duration report. Period: {time_period}, User: {user_filter}")
    sql_query = calculate_duration_sql(time_period)
    params = {}
    if user_filter:
        # Note: filter_sql_by_user returns a modified copy of the query string
        sql_query = filter_sql_by_user(sql_query, user_filter)
        params['user'] = user_filter
        app.logger.debug(f"Applying user filter: {user_filter}")

    # Log the database connection info (credentials masked)
    db_uri = app.config['SQLALCHEMY_DATABASE_URI']
    masked_uri = db_uri
    if '@' in db_uri:
        parts = db_uri.split('@')
        masked_uri = "****@" + parts[1]
    app.logger.info(f"Executing query using database: {masked_uri}")

    try:
        # Sanity checks: verify record count and distinct users in the table
        count_result = db.session.execute(text("SELECT COUNT(*) FROM work_events")).scalar()
        app.logger.info(f"Total records in work_events table: {count_result}")
        users_result = db.session.execute(text('SELECT DISTINCT "user" FROM work_events')).fetchall()
        user_list = [row[0] for row in users_result]
        app.logger.info(f"Distinct users in work_events table: {user_list}")

        results = db.session.execute(text(sql_query), params).mappings().all()
        app.logger.debug(f"Database query executed. Found {len(results)} rows.")
        return results
    except Exception as e:
        app.logger.error(f"Error executing duration report query: {e}")
        # Re-raise the exception to be handled by the endpoint's error handler
        raise
def fetch_user_activity(username, start_date, end_date):
    """Fetches detailed user activity logs for a specific date range."""
    app.logger.debug(f"Fetching activity logs for user: {username}, from: {start_date}, to: {end_date}")

    # Automatically consider a user not working after 10 minutes of inactivity
    auto_timeout_seconds = 10 * 60  # 10 minutes in seconds

    # SQL query to match working/stopped pairs and calculate durations,
    # capping any session at the timeout if there is no explicit stop
    sql_query = f"""
    WITH AllEvents AS (
        SELECT
            "user",
            ts,
            state
        FROM work_events
        WHERE "user" = :username
        AND DATE(ts) BETWEEN :start_date AND :end_date
        UNION ALL
        -- Add virtual "stopped" events for sessions without explicit stops
        SELECT
            w."user",
            w.ts + INTERVAL '{auto_timeout_seconds} seconds' AS ts,
            'stopped' AS state
        FROM work_events w
        WHERE w."user" = :username
        AND DATE(w.ts) BETWEEN :start_date AND :end_date
        AND w.state = 'working'
        AND NOT EXISTS (
            SELECT 1
            FROM work_events w2
            WHERE w2."user" = w."user"
            AND w2.ts > w.ts
            AND w2.ts <= w.ts + INTERVAL '{auto_timeout_seconds} seconds'
        )
        -- Avoid duplicating existing stop events
        AND NOT EXISTS (
            SELECT 1
            FROM work_events w3
            WHERE w3."user" = w."user"
            AND w3.state = 'stopped'
            AND w3.ts = w.ts + INTERVAL '{auto_timeout_seconds} seconds'
        )
    ),
    EventPairs AS (
        SELECT
            w1."user",
            DATE(w1.ts) AS work_date,
            w1.ts AS start_time,
            w2.ts AS end_time,
            EXTRACT(EPOCH FROM (w2.ts - w1.ts)) AS session_duration_seconds
        FROM AllEvents w1
        JOIN AllEvents w2
            ON w1."user" = w2."user"
            AND w1.state = 'working'
            AND w2.state = 'stopped'
            AND w2.ts > w1.ts
            AND NOT EXISTS (
                SELECT 1 FROM AllEvents w3
                WHERE w3."user" = w1."user"
                AND w3.ts > w1.ts AND w3.ts < w2.ts
            )
    ),
    TimeoutAdjusted AS (
        SELECT
            "user",
            work_date,
            start_time,
            CASE
                -- If a session exceeds the timeout with no explicit stop, cap it at the timeout
                WHEN session_duration_seconds > {auto_timeout_seconds}
                THEN start_time + INTERVAL '{auto_timeout_seconds} seconds'
                ELSE end_time
            END AS adjusted_end_time,
            CASE
                WHEN session_duration_seconds > {auto_timeout_seconds}
                THEN {auto_timeout_seconds} / 3600.0
                ELSE session_duration_seconds / 3600.0
            END AS session_duration_hours
        FROM EventPairs
    )
    SELECT * FROM TimeoutAdjusted
    ORDER BY start_time
    """
    try:
        params = {
            'username': username,
            'start_date': start_date,
            'end_date': end_date
        }
        results = db.session.execute(text(sql_query), params).mappings().all()
        app.logger.debug(f"User activity query executed. Found {len(results)} rows.")
        return results
    except Exception as e:
        app.logger.error(f"Error executing user activity query: {e}")
        raise
def format_report_data(results, time_period):
"""Formats the raw database results into a list of dictionaries for the API."""
app.logger.debug(f"Formatting report data for period: {time_period}. Input rows: {len(results)}")
period_key_map = {
'daily': 'day',
'weekly': 'week_start',
'monthly': 'month_start'
}
period_key = period_key_map.get(time_period, 'period_start') # Default if unexpected period
# First convert raw rows to dictionaries
raw_data = []
for row in results:
# Ensure period_start is converted to string if it's a date/datetime object
period_value = row['period_start']
if hasattr(period_value, 'isoformat'):
period_value = period_value.isoformat()
# Format first_login_time
first_login_time = row['first_login_time']
if hasattr(first_login_time, 'isoformat'):
first_login_time = first_login_time.isoformat()
# Ensure duration_hours is a float, not a string or Decimal
duration_hours = row['total_hours']
if duration_hours is None:
duration_hours = 0.0
else:
# Convert to float explicitly to ensure it's JSON serializable as a number
duration_hours = float(duration_hours)
raw_data.append({
'user': row['user'],
period_key: period_value,
'duration_hours': duration_hours,
'first_login_time': first_login_time
})
# Additional preprocessing to consolidate any duplicate user entries
user_period_map = {}
for entry in raw_data:
user = entry['user']
period = entry[period_key]
key = f"{user}_{period}"
if key in user_period_map:
# Aggregate duration for existing user+period
user_period_map[key]['duration_hours'] += entry['duration_hours']
# Use the earliest first_login_time; fall back to whichever value is set
existing_time = user_period_map[key]['first_login_time']
new_time = entry['first_login_time']
if existing_time and new_time:
if new_time < existing_time:
user_period_map[key]['first_login_time'] = new_time
elif new_time and not existing_time:
user_period_map[key]['first_login_time'] = new_time
else:
# New user+period combination
user_period_map[key] = entry
# Convert consolidated map back to list
formatted_data = list(user_period_map.values())
app.logger.debug(f"Formatted report data created. Output rows: {len(formatted_data)}")
return formatted_data
def format_user_activity(results):
"""Formats the raw user activity results into a list of dictionaries."""
formatted_data = []
for row in results:
start_time = row['start_time']
end_time = row['adjusted_end_time'] # Now using the timeout-adjusted end time
# Format timestamps for display
if hasattr(start_time, 'isoformat'):
start_time = start_time.isoformat()
if hasattr(end_time, 'isoformat'):
end_time = end_time.isoformat()
# Format duration as float
duration = float(row['session_duration_hours']) if row['session_duration_hours'] is not None else 0.0
formatted_data.append({
'date': row['work_date'].isoformat() if hasattr(row['work_date'], 'isoformat') else str(row['work_date']),
'start_time': start_time,
'end_time': end_time,
'duration_hours': round(duration, 2)
})
return formatted_data
# --- Reporting Endpoints (basic implementation) ---
@app.route('/api/reports/daily', methods=['GET'])
def get_daily_report():
app.logger.info("Daily report API requested.")
try:
app.logger.info("Fetching daily report data...")
# Get date parameter or use today as default
selected_date = request.args.get('date')
if selected_date:
app.logger.info(f"Using selected date: {selected_date}")
else:
selected_date = datetime.now().strftime('%Y-%m-%d')
app.logger.info(f"No date provided, using today: {selected_date}")
user_filter = request.args.get('user')
# Get regular daily report results
results = fetch_duration_report('daily', user_filter)
# Filter to only include entries for the selected date
filtered_results = []
for row in results:
row_date = row['period_start']
if hasattr(row_date, 'isoformat'):
row_date = row_date.isoformat()
# Check if the row's date matches the selected date
if row_date and row_date.startswith(selected_date):
filtered_results.append(row)
# Add debug logging for usernames in raw results
app.logger.info(f"Raw results usernames for date {selected_date}: {[r['user'] for r in filtered_results]}")
report = format_report_data(filtered_results, 'daily')
# Add debug logging for usernames in formatted data
app.logger.info(f"Formatted data usernames: {[r['user'] for r in report]}")
app.logger.info(f"Successfully generated daily report for date: {selected_date}, user: {request.args.get('user', 'All')}. Found {len(filtered_results)} records.")
return jsonify({"success": True, "data": report})
except Exception as e:
app.logger.error(f"Error generating daily report: {e}")
return jsonify({"success": False, "message": "Error generating report"}), 500
@app.route('/api/reports/weekly', methods=['GET'])
def get_weekly_report():
app.logger.info("Weekly report API requested.")
try:
app.logger.info("Fetching weekly report data...")
# Check if a specific day within the week was requested
day_filter = request.args.get('day')
user_filter = request.args.get('user')
if day_filter:
app.logger.info(f"Filtering weekly report for specific day: {day_filter}")
# Use daily query with the specific date
results = fetch_duration_report('daily', user_filter)
# Filter results to only include the requested day
filtered_results = []
for row in results:
row_date = row['period_start']
if hasattr(row_date, 'isoformat'):
row_date = row_date.isoformat()
# Check if the row's date matches the requested day
if row_date and row_date.startswith(day_filter):
filtered_results.append(row)
results = filtered_results
else:
# Get current week dates for filtering
now = datetime.now()
# Get Monday of current week
current_week_start = now - timedelta(days=now.weekday())
current_week_start = current_week_start.replace(hour=0, minute=0, second=0, microsecond=0)
# Regular weekly report (whole week)
results = fetch_duration_report('weekly', user_filter)
# Filter to just include current week and aggregate by user
filtered_results = []
user_aggregated = {}
for row in results:
row_date = row['period_start']
# Convert to datetime if it's a string
if isinstance(row_date, str):
try:
row_date = datetime.fromisoformat(row_date.replace('Z', '+00:00'))
except ValueError:
continue
# Weekly rows are keyed by their week-start date, so the current week matches this Monday
if row_date and row_date.date() == current_week_start.date():
username = row['user']
if username in user_aggregated:
# Add duration hours
user_aggregated[username]['total_hours'] += row['total_hours']
# Keep earliest first_login_time
if row['first_login_time'] and user_aggregated[username]['first_login_time']:
row_login = row['first_login_time']
if isinstance(row_login, str):
try:
row_login = datetime.fromisoformat(row_login.replace('Z', '+00:00'))
except ValueError:
row_login = None
existing_login = user_aggregated[username]['first_login_time']
if isinstance(existing_login, str):
try:
existing_login = datetime.fromisoformat(existing_login.replace('Z', '+00:00'))
except ValueError:
existing_login = None
if row_login and existing_login and row_login < existing_login:
user_aggregated[username]['first_login_time'] = row['first_login_time']
else:
# First entry for this user
user_aggregated[username] = {
'user': username,
'period_start': row_date,
'total_hours': row['total_hours'],
'first_login_time': row['first_login_time']
}
# Convert aggregated dict back to list
filtered_results = list(user_aggregated.values())
# An empty filtered list means no activity this week; do not fall back to other weeks
results = filtered_results
# Add debug logging for usernames in raw results
app.logger.info(f"Raw results usernames: {[r['user'] for r in results]}")
report = format_report_data(results, 'weekly')
# Add debug logging for usernames in formatted data
app.logger.info(f"Formatted data usernames: {[r['user'] for r in report]}")
app.logger.info(f"Successfully generated weekly report for user: {request.args.get('user', 'All')}. Found {len(results)} records.")
return jsonify({"success": True, "data": report})
except Exception as e:
app.logger.error(f"Error generating weekly report: {e}")
return jsonify({"success": False, "message": "Error generating report"}), 500
@app.route('/api/reports/monthly', methods=['GET'])
def get_monthly_report():
app.logger.info("Monthly report API requested.")
try:
app.logger.info("Fetching monthly report data...")
# Get current month for filtering
now = datetime.now()
current_month_start = now.replace(day=1, hour=0, minute=0, second=0, microsecond=0)
# Get regular monthly report
results = fetch_duration_report('monthly', request.args.get('user'))
# Filter to only include current month and aggregate by user
filtered_results = []
user_aggregated = {}
for row in results:
row_date = row['period_start']
# Convert to datetime if it's a string
if isinstance(row_date, str):
try:
row_date = datetime.fromisoformat(row_date.replace('Z', '+00:00'))
except ValueError:
continue
# Check if it's from current month
if row_date and row_date.year == current_month_start.year and row_date.month == current_month_start.month:
username = row['user']
if username in user_aggregated:
# Add duration hours
user_aggregated[username]['total_hours'] += row['total_hours']
else:
# First entry for this user
user_aggregated[username] = {
'user': username,
'period_start': row_date,
'total_hours': row['total_hours'],
'first_login_time': row['first_login_time']
}
# Convert aggregated dict back to list
filtered_results = list(user_aggregated.values())
# An empty filtered list means no activity this month; do not fall back to other months
results = filtered_results
# Add debug logging for usernames in raw results
app.logger.info(f"Raw results usernames: {[r['user'] for r in results]}")
report = format_report_data(results, 'monthly')
# Add debug logging for usernames in formatted data
app.logger.info(f"Formatted data usernames: {[r['user'] for r in report]}")
app.logger.info(f"Successfully generated monthly report for user: {request.args.get('user', 'All')}. Found {len(results)} records.")
return jsonify({"success": True, "data": report})
except Exception as e:
app.logger.error(f"Error generating monthly report: {e}")
return jsonify({"success": False, "message": "Error generating report"}), 500
@app.route('/api/user-activity/<username>', methods=['GET'])
def get_user_activity(username):
"""Gets detailed activity logs for a specific user."""
app.logger.info(f"User activity logs requested for: {username}")
# Get date range from query parameters, default to current day if not provided
start_date = request.args.get('start_date', datetime.now().strftime('%Y-%m-%d'))
end_date = request.args.get('end_date', start_date)
try:
app.logger.info(f"Fetching activity logs for user: {username}, from: {start_date}, to: {end_date}")
results = fetch_user_activity(username, start_date, end_date)
activity_logs = format_user_activity(results)
app.logger.info(f"Successfully retrieved {len(activity_logs)} activity records for user: {username}")
return jsonify({
"success": True,
"data": {
"username": username,
"start_date": start_date,
"end_date": end_date,
"activities": activity_logs
}
})
except Exception as e:
app.logger.error(f"Error retrieving user activity logs: {e}")
return jsonify({"success": False, "message": "Error retrieving activity logs"}), 500
# Error handlers
@app.errorhandler(400)
def bad_request(error):
return jsonify({
'success': False,
'message': 'Bad request'
}), 400
@app.errorhandler(404)
def not_found_error(error):
app.logger.warning(f"404 Not Found error triggered for URL: {request.url}")
return jsonify({"success": False, "message": "Resource not found"}), 404
@app.errorhandler(405)
def method_not_allowed(error):
return jsonify({
'success': False,
'message': 'Method not allowed'
}), 405
@app.errorhandler(500)
def internal_error(error):
# Note: The specific error causing the 500 might have already been logged
app.logger.error(f"Global 500 Internal Server error handler triggered: {error}")
return jsonify({"success": False, "message": "Internal server error"}), 500
# Simple dashboard (optional)
@app.route('/')
def dashboard():
app.logger.info("Dashboard page requested.")
# Add direct query to verify data
try:
# Direct query to list all distinct users
direct_query = "SELECT DISTINCT \"user\" FROM work_events"
direct_results = db.session.execute(text(direct_query)).fetchall()
user_list = [row[0] for row in direct_results]
app.logger.info(f"DIRECT QUERY - Distinct users in database: {user_list}")
# Direct query to count records
count_query = "SELECT COUNT(*) FROM work_events"
count_result = db.session.execute(text(count_query)).scalar()
app.logger.info(f"DIRECT QUERY - Total records in work_events: {count_result}")
# Get first few records to inspect
sample_query = "SELECT id, \"user\", state, ts FROM work_events LIMIT 5"
sample_results = db.session.execute(text(sample_query)).fetchall()
app.logger.info(f"DIRECT QUERY - Sample records: {sample_results}")
# Check the current schema name
schema_query = "SELECT current_schema()"
schema_result = db.session.execute(text(schema_query)).scalar()
app.logger.info(f"DIRECT QUERY - Current schema: {schema_result}")
# List all schemas in the database
schemas_query = "SELECT schema_name FROM information_schema.schemata"
schemas_results = db.session.execute(text(schemas_query)).fetchall()
schema_list = [row[0] for row in schemas_results]
app.logger.info(f"DIRECT QUERY - Available schemas: {schema_list}")
# List all tables in the public schema
tables_query = "SELECT table_name FROM information_schema.tables WHERE table_schema = 'public'"
tables_results = db.session.execute(text(tables_query)).fetchall()
table_list = [row[0] for row in tables_results]
app.logger.info(f"DIRECT QUERY - Tables in public schema: {table_list}")
except Exception as e:
app.logger.error(f"Error during direct database query debugging: {str(e)}")
return render_template('dashboard.html')
# Create the database tables
def init_db():
with app.app_context():
# Log the full database URI (with sensitive info removed)
db_uri = app.config['SQLALCHEMY_DATABASE_URI']
if 'postgresql' in db_uri:
# Mask password if using PostgreSQL
masked_uri = db_uri.replace(db_uri.split('@')[0], 'postgresql://****:****')
app.logger.info(f"Using database URI: {masked_uri}")
# Initialize PostgreSQL-specific components
app.logger.info("Detected PostgreSQL database")
# Check for extensions (can add more as needed)
try:
db.session.execute(text("SELECT 1 FROM pg_extension WHERE extname = 'pgcrypto'"))
app.logger.info("PostgreSQL database version check completed")
except Exception as e:
app.logger.warning(f"PostgreSQL extension check failed: {e}")
else:
app.logger.info(f"Using database URI: {db_uri}")
# For SQLite, ensure parent directory exists and is writable
if db_uri and db_uri.startswith('sqlite:///'):
db_file = db_uri.replace('sqlite:///', '')
db_dir = os.path.dirname(db_file)
app.logger.info(f"SQLite database file path: {db_file}")
app.logger.info(f"SQLite database directory exists: {os.path.exists(db_dir)}")
if os.path.exists(db_dir):
app.logger.info(f"SQLite database directory writable: {os.access(db_dir, os.W_OK)}")
# Create the tables
db.create_all()
app.logger.info("Database initialized")
if __name__ == '__main__':
print("Starting application...")
# Create database tables if they don't exist
print(f"Instance path: {app.instance_path}")
instance_exists = os.path.exists(app.instance_path)
print(f"Instance path exists: {instance_exists}")
# Make sure the instance directory exists
if not instance_exists:
try:
os.makedirs(app.instance_path)
print(f"Created instance directory: {app.instance_path}")
except Exception as e:
print(f"Error creating instance directory: {e}")
# Check instance directory permissions
has_write_access = os.access(app.instance_path, os.W_OK)
print(f"Instance path write access: {has_write_access}")
# Print database configuration
db_uri = app.config['SQLALCHEMY_DATABASE_URI']
print(f"Database URI: {db_uri}")
# For SQLite, print more details about the database file
if db_uri.startswith('sqlite:///'):
db_file = db_uri.replace('sqlite:///', '')
print(f"Database file path: {db_file}")
db_dir = os.path.dirname(db_file)
print(f"Database directory: {db_dir}")
print(f"Database directory exists: {os.path.exists(db_dir)}")
print(f"Database directory writable: {os.access(db_dir, os.W_OK)}")
print(f"Database file exists: {os.path.exists(db_file)}")
if os.path.exists(db_file):
print(f"Database file writable: {os.access(db_file, os.W_OK)}")
# Use try/except to catch and log any initialization errors
try:
print("Initializing database...")
init_db() # Ensure database and tables are created on first run
print("Database initialization successful")
except Exception as e:
print(f"Error during database initialization: {e}")
import traceback
traceback.print_exc()
# Check if database file was created (for SQLite)
if db_uri.startswith('sqlite:///'):
db_file = db_uri.replace('sqlite:///', '')
print(f"After init: Database file exists: {os.path.exists(db_file)}")
# Run the Flask application
host = os.environ.get('HOST', '0.0.0.0')
port = int(os.environ.get('PORT', 5000))
debug = os.environ.get('DEBUG', 'False').lower() == 'true'
print(f"Starting Flask application on {host}:{port} (debug={debug})")
app.run(host=host, port=port, debug=debug)
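The virtual-stop and pairing logic embedded in the SQL earlier in this file can be hard to follow. Below is a minimal pure-Python model of the same rule, for illustration only (this is not code the server runs), assuming events are pre-sorted by timestamp and the 10-minute `auto_timeout_seconds` cap:

```python
from datetime import datetime, timedelta

AUTO_TIMEOUT = timedelta(seconds=600)  # mirrors auto_timeout_seconds (10 minutes)

def paired_sessions(events):
    """Pair 'working' events with the event that closes them, as the SQL does.

    events: list of (timestamp, state) tuples sorted by timestamp.
    Returns (start, end, duration_hours) tuples.
    """
    sessions = []
    for i, (ts, state) in enumerate(events):
        if state != 'working':
            continue
        nxt = events[i + 1] if i + 1 < len(events) else None
        if nxt is None or nxt[0] - ts > AUTO_TIMEOUT:
            end = ts + AUTO_TIMEOUT            # virtual 'stopped' after 10 idle minutes
        elif nxt[1] == 'stopped':
            end = nxt[0]                       # explicit stop closes the session
        else:
            continue                           # another 'working' event follows; no pair
        sessions.append((ts, end, (end - ts).total_seconds() / 3600.0))
    return sessions

events = [
    (datetime(2025, 5, 16, 9, 0), 'working'),
    (datetime(2025, 5, 16, 9, 5), 'stopped'),
    (datetime(2025, 5, 16, 10, 0), 'working'),  # never explicitly stopped
]
sessions = paired_sessions(events)
```

A `working` event with no follow-up event inside 10 minutes is closed by a virtual stop, which corresponds to the `NOT EXISTS` branch of the `AllEvents` CTE; the cap in `TimeoutAdjusted` then guarantees no session exceeds 10 minutes without an explicit stop.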

app/__init__.py (new file, 157 lines)
"""
Employee Workstation Activity Tracking - Flask Application Factory
This module provides the application factory function 'create_app' that initializes
the Flask application with its configuration, database connection, and registered blueprints.
"""
import logging
import os
from logging.handlers import RotatingFileHandler
from dotenv import load_dotenv
from flask import Flask
from flask_sqlalchemy import SQLAlchemy
from sqlalchemy import event
# Initialize SQLAlchemy globally to avoid circular imports
db = SQLAlchemy()
def create_app(test_config=None):
"""Create and configure the Flask application using the factory pattern."""
# Load environment variables
load_dotenv() # Try .env first
config_env_path = os.path.join(
os.path.dirname(os.path.dirname(__file__)), "config.env"
)
# Defer logging for config.env until logger is configured
config_env_loaded_message = None
config_env_warning_message = None
if os.path.exists(config_env_path):
load_dotenv(config_env_path)
config_env_loaded_message = (
f"Loaded environment variables from {config_env_path}"
)
else:
config_env_warning_message = (
f"Warning: config.env file not found at {config_env_path}"
)
# Get the project root directory (parent of app directory)
project_root = os.path.dirname(os.path.dirname(__file__))
# Create and configure the app with template folder in project root
app = Flask(
__name__,
instance_relative_config=True,
template_folder=os.path.join(project_root, "templates"),
static_folder=os.path.join(project_root, "static"),
)
# Default configuration
app.config.from_mapping(
SECRET_KEY=os.environ.get("SECRET_KEY", "dev"),
SQLALCHEMY_DATABASE_URI=os.environ.get(
"DATABASE_URI", os.environ.get("DATABASE_URL")
),
SQLALCHEMY_TRACK_MODIFICATIONS=False,
)
# Override configuration with test config if provided
if test_config is not None:
app.config.update(test_config)
# Ensure DATABASE_URI is set
if not app.config["SQLALCHEMY_DATABASE_URI"]:
raise ValueError(
"DATABASE_URI or DATABASE_URL environment variable must be set"
)
# Ensure the instance folder exists
try:
os.makedirs(app.instance_path)
except OSError:
pass
# Configure logging
if not app.debug: # Avoid duplicate logs in debug mode
log_formatter = logging.Formatter("%(asctime)s - %(levelname)s - %(message)s")
log_handler = RotatingFileHandler(
os.path.join(app.instance_path, "server.log"),
maxBytes=1024 * 1024 * 5, # 5 MB
backupCount=5,
)
log_handler.setFormatter(log_formatter)
log_handler.setLevel(logging.INFO)
app.logger.addHandler(log_handler)
app.logger.setLevel(logging.INFO)
# Log config.env messages now that logger is available
if config_env_loaded_message:
app.logger.info(config_env_loaded_message)
if config_env_warning_message:
app.logger.warning(config_env_warning_message)
app.logger.info("Flask application starting up...")
# Initialize the database with the app
db.init_app(app)
# Set up event listener for PostgreSQL timezone
with app.app_context():
if "postgresql" in app.config["SQLALCHEMY_DATABASE_URI"]:
@event.listens_for(db.engine, "connect")
def set_timezone(dbapi_connection, connection_record):
cursor = dbapi_connection.cursor()
cursor.execute("SET timezone TO 'Asia/Dubai';")
cursor.close()
app.logger.info("Set PostgreSQL session timezone to 'Asia/Dubai' (UTC+4)")
app.logger.info("Note: Application uses UTC internally, database is set to Asia/Dubai time.")
# Import and register blueprints
from app.api import events_bp, reports_bp
app.register_blueprint(events_bp)
app.register_blueprint(reports_bp)
from app.views.dashboard import views_bp
app.register_blueprint(views_bp)
# Register error handlers
from app.errors import register_error_handlers
register_error_handlers(app)
# Initialize database tables when in development
@app.cli.command("init-db")
def init_db_command():
"""Clear existing data and create new tables."""
with app.app_context():
db.create_all()
app.logger.info("Database tables created")
@app.route("/healthcheck")
def healthcheck():
return {"status": "ok"}, 200
# Initialize and start the scheduler
from app.scheduler import init_scheduler
init_scheduler(app)
# Register custom CLI commands
from app.cli import register_cli_commands
register_cli_commands(app)
return app
def init_db(app_instance):
"""Initialize the database outside of the CLI context using a provided app
instance."""
# app = create_app() # No longer creating app here, using provided instance
with app_instance.app_context():
db.create_all()
app_instance.logger.info("Database initialized via init_db function")
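The timezone convention established in `create_app` (UTC internally, Asia/Dubai at the database session and in the dashboard) can be illustrated with the standard library; this is a sketch, not application code:

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo  # stdlib since Python 3.9

# An event timestamp handled internally in UTC...
event_utc = datetime(2025, 5, 16, 13, 55, tzinfo=timezone.utc)

# ...corresponds to this wall-clock time in GMT+4 (Asia/Dubai):
event_dubai = event_utc.astimezone(ZoneInfo("Asia/Dubai"))
print(event_dubai.isoformat())  # 2025-05-16T17:55:00+04:00
```

Asia/Dubai has a fixed +04:00 offset with no daylight saving, so the conversion is a constant four-hour shift.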

app/api/__init__.py (new file, 10 lines)
"""
API package for employee workstation activity tracking.
This package contains the API endpoints for reporting events and retrieving data.
"""
from app.api.events import events_bp
from app.api.reports import reports_bp
__all__ = ["events_bp", "reports_bp"]

app/api/events.py (new file, 212 lines)
"""
API endpoints for reporting user activity events.
This module provides endpoints for clients to report state changes (working/stopped).
"""
from datetime import datetime, timezone
from flask import Blueprint, current_app, jsonify, request
from sqlalchemy.exc import SQLAlchemyError
from app import db
from app.models import WorkEvent
# Create a blueprint for event-related API endpoints
events_bp = Blueprint("events", __name__, url_prefix="/api")
@events_bp.route("/report", methods=["POST"])
def report_event():
"""
Endpoint for clients to report activity state changes.
All timestamps are expected to be in UTC if provided by the client,
and will be stored as UTC (or DB equivalent default).
Expected JSON payload:
{
"user": "username",
"state": "working|stopped",
"ts": "2023-07-08T12:30:45Z" (optional, ISO 8601 UTC)
}
"""
data = request.get_json()
current_app.logger.info(f"Received report request: {data}")
if not data or "user" not in data or "state" not in data:
current_app.logger.warning(
"Invalid report request payload: Missing required fields."
)
return (
jsonify(
{"success": False, "message": "Missing required fields: user, state"}
),
400,
)
if data["state"] not in ["working", "stopped"]:
current_app.logger.warning(
f"Invalid state value '{data['state']}' in report request."
)
return (
jsonify(
{
"success": False,
"message": 'Invalid state value. Must be "working" or "stopped"',
}
),
400,
)
user = data["user"]
state = data["state"]
ts_str = data.get("ts")
event_ts = None
if ts_str:
try:
# datetime.fromisoformat() on Python < 3.11 cannot parse a trailing 'Z', so normalize it to '+00:00'
if ts_str.endswith("Z"):
ts_str = ts_str[:-1] + "+00:00"
event_ts = datetime.fromisoformat(ts_str)
# Ensure the parsed timestamp is UTC. If it has an offset, convert to UTC.
if (
event_ts.tzinfo is not None
and event_ts.tzinfo.utcoffset(event_ts) is not None
):
event_ts = event_ts.astimezone(timezone.utc)
else:
# Timezone-naive timestamps are assumed to be UTC, per the API contract.
# (Stricter alternatives: require tz-aware strings, or reject naive ones.)
current_app.logger.debug(
f"Received naive timestamp {ts_str}, assuming UTC."
)
current_app.logger.info(
f"Using client-provided timestamp (UTC): {event_ts}"
)
except ValueError:
current_app.logger.warning(
f"Invalid timestamp format received: {ts_str}. Returning error."
)
return (
jsonify(
{
"success": False,
"message": "Invalid timestamp format. Please use ISO 8601 UTC (e.g., YYYY-MM-DDTHH:MM:SSZ).",
}
),
400,
)
else:
event_ts = datetime.utcnow()
current_app.logger.info(
f"No client timestamp provided, using current UTC time: {event_ts}"
)
# At this point, event_ts should be an aware UTC datetime object or naive UTC to be stored.
# If database stores naive timestamps and assumes UTC (common for SQLite), ensure event_ts is naive UTC.
# If database stores aware timestamps (common for PostgreSQL with timestamptz), event_ts should be aware UTC.
#
# IMPORTANT NOTE ABOUT TIMEZONE HANDLING:
# - Client-side scripts send timestamps in UTC (with Z suffix in ISO format)
# - Backend processes all timestamps in UTC format
# - PostgreSQL database session is set to Asia/Dubai timezone (UTC+4)
# - When storing timestamps, PostgreSQL converts from UTC to Asia/Dubai
# - When retrieving timestamps, PostgreSQL returns in Asia/Dubai timezone
# - The frontend displays all times in GMT+4 (Asia/Dubai) using formatTimeToGMT4()
#
# This ensures consistent timezone handling throughout the application stack.
current_app.logger.debug(f"Storing event with timestamp (UTC): {event_ts}")
new_event = WorkEvent(user=user, state=state, ts=event_ts)
try:
current_app.logger.info(
f"Attempting to add event to database: User={user}, State={state}, TS={event_ts}"
)
db.session.add(new_event)
db.session.commit()
current_app.logger.info(
f"Successfully recorded event: User={user}, State={state}"
)
return jsonify({"success": True}), 201
except SQLAlchemyError as e:
db.session.rollback()
current_app.logger.error(f"Database error while recording event: {e}")
return jsonify({"success": False, "message": "Database error"}), 500
@events_bp.route("/user_status_update", methods=["POST"])
def update_user_status():
"""
Endpoint for updating a user's status, typically when a timeout occurs.
Expects JSON: {"user_id": "username", "status": "not working"}
This will result in a 'stopped' event being logged for the user.
"""
data = request.get_json()
current_app.logger.info(f"Received user status update request: {data}")
if not data or "user_id" not in data or "status" not in data:
current_app.logger.warning(
"Invalid user status update payload: Missing required fields."
)
return (
jsonify(
{
"success": False,
"message": "Missing required fields: user_id, status",
}
),
400,
)
user_id = data["user_id"]
status = data["status"]
if status != "not working":
current_app.logger.warning(
f"Invalid status value '{status}' in user status update for user {user_id}. Expected 'not working'."
)
return (
jsonify(
{
"success": False,
"message": 'Invalid status value. Must be "not working"',
}
),
400,
)
# Map "not working" to "stopped" for WorkEvent consistency
event_state = "stopped"
event_ts = datetime.utcnow()
new_event = WorkEvent(user=user_id, state=event_state, ts=event_ts)
try:
current_app.logger.info(
f"Attempting to add '{event_state}' event to database for user {user_id} from status update."
)
db.session.add(new_event)
db.session.commit()
current_app.logger.info(
f"Successfully recorded '{event_state}' event for user {user_id} from status update."
)
return (
jsonify(
{
"success": True,
"message": "User status successfully updated to stopped.",
}
),
201,
)
except SQLAlchemyError as e:
db.session.rollback()
current_app.logger.error(
f"Database error while recording '{event_state}' event for user {user_id} from status update: {e}"
)
return (
jsonify({"success": False, "message": "Database error processing request"}),
500,
)
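The timestamp handling in `report_event` above (trailing-`Z` normalization, conversion of offset-aware values to UTC, naive-means-UTC assumption) can be isolated as a small helper. A standalone sketch mirroring the endpoint's logic:

```python
from datetime import datetime, timezone

def parse_report_ts(ts_str):
    """Parse a client timestamp the way /api/report does (a standalone sketch).

    A trailing 'Z' is rewritten to '+00:00' (fromisoformat on Python < 3.11
    rejects 'Z'); offset-aware values are converted to UTC; naive values are
    assumed to already be in UTC and returned unchanged.
    """
    if ts_str.endswith("Z"):
        ts_str = ts_str[:-1] + "+00:00"
    ts = datetime.fromisoformat(ts_str)
    if ts.tzinfo is not None and ts.tzinfo.utcoffset(ts) is not None:
        ts = ts.astimezone(timezone.utc)
    return ts

utc = parse_report_ts("2023-07-08T12:30:45Z")
dubai = parse_report_ts("2023-07-08T16:30:45+04:00")
```

Both calls above resolve to the same UTC instant, 12:30:45, which is why a client in GMT+4 can send either form.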

app/api/reports.py (new file, 615 lines)
"""
API endpoints for retrieving activity reports.
This module provides endpoints for retrieving daily, weekly, and monthly reports,
as well as detailed user activity logs.
"""
import requests
from datetime import datetime, timedelta
from flask import Blueprint, current_app, jsonify, request
from sqlalchemy import text
from sqlalchemy.exc import SQLAlchemyError
from sqlalchemy.sql import func, and_
from app import db
from app.utils.formatting import format_report_data, format_user_activity
from app.utils.queries import calculate_duration_sql
from app.models import UserRealWorkSummary, WorkEvent
# Create a blueprint for report-related API endpoints
reports_bp = Blueprint("reports", __name__, url_prefix="/api")
def fetch_duration_report(
time_period, user_filter=None, start_date=None, end_date=None
):
"""
Fetches duration report data from the database.
Args:
time_period (str): Time period to group by ('daily', 'weekly', or 'monthly')
user_filter (str, optional): Username to filter results by.
start_date (str, optional): Start date for filtering (YYYY-MM-DD).
end_date (str, optional): End date for filtering (YYYY-MM-DD).
Returns:
list: List of report data rows
"""
current_app.logger.debug(
f"Fetching duration report. Period: {time_period}, User: {user_filter}, Start: {start_date}, End: {end_date}"
)
# Get SQL query and parameters from the refactored function
sql_query, params = calculate_duration_sql(
time_period,
user_filter=user_filter,
start_date_filter=start_date,
end_date_filter=end_date,
)
# Debugging for database connection URI (can be made less verbose or conditional)
db_uri = current_app.config["SQLALCHEMY_DATABASE_URI"]
masked_uri = db_uri
if "@" in db_uri:
parts = db_uri.split("@")
masked_uri = "****@" + parts[1]
current_app.logger.info(f"Executing query using database: {masked_uri}")
# current_app.logger.debug(f"Query: {sql_query}") # Optional: log full query
# current_app.logger.debug(f"Params: {params}") # Optional: log params
try:
# Example diagnostic queries (can be removed or made conditional for production)
# count_query = "SELECT COUNT(*) FROM work_events"
# count_result = db.session.execute(text(count_query)).scalar()
# current_app.logger.info(f"Total records in work_events table: {count_result}")
# users_query = "SELECT DISTINCT \"user\" FROM work_events"
# users_result = db.session.execute(text(users_query)).fetchall()
# user_list = [row[0] for row in users_result]
# current_app.logger.info(f"Distinct users in work_events table: {user_list}")
results = db.session.execute(text(sql_query), params).mappings().all()
current_app.logger.debug(
f"Database query executed for {time_period} report. Found {len(results)} rows."
)
return results
except SQLAlchemyError as e:
current_app.logger.error(
f"Error executing duration report query (period: {time_period}): {e}"
)
raise
def fetch_user_activity(username, start_date, end_date):
"""
Fetches detailed user activity logs for a specific date range.
Args:
username (str): Username to fetch activity for
start_date (str): Start date in YYYY-MM-DD format
end_date (str): End date in YYYY-MM-DD format
Returns:
list: List of user activity rows
"""
current_app.logger.debug(
f"Fetching activity logs for user: {username}, from: {start_date}, to: {end_date}"
)
# SQL query to match working and stopped pairs and calculate durations
sql_query = """
WITH EventPairs AS (
SELECT
w1."user",
DATE(w1.ts) AS work_date,
w1.ts AS start_time,
w2.ts AS end_time,
EXTRACT(EPOCH FROM (w2.ts - w1.ts))/3600 AS session_duration_hours
FROM
work_events w1
JOIN
work_events w2 ON w1."user" = w2."user"
AND w1.state = 'working'
AND w2.state = 'stopped'
AND w2.ts > w1.ts
AND NOT EXISTS (
SELECT 1 FROM work_events w3
WHERE w3."user" = w1."user"
AND w3.ts > w1.ts AND w3.ts < w2.ts
)
WHERE
w1."user" = :username
AND DATE(w1.ts) BETWEEN :start_date AND :end_date
ORDER BY
w1.ts
)
SELECT * FROM EventPairs
"""
try:
params = {"username": username, "start_date": start_date, "end_date": end_date}
results = db.session.execute(text(sql_query), params).mappings().all()
current_app.logger.debug(
f"User activity query executed. Found {len(results)} rows."
)
return results
except SQLAlchemyError as e:
current_app.logger.error(f"Error executing user activity query: {e}")
raise
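The CTE above pairs every 'working' event with the immediately following 'stopped' event, and the NOT EXISTS guard rejects any pair with an intervening event. A pure-Python sketch of the same pairing for a single user's events (the in-memory dicts are illustrative, not the work_events schema):

```python
from datetime import datetime

def pair_sessions(events):
    """Pair each 'working' event with the very next event, keeping the
    pair only when that next event is 'stopped' -- mirroring the SQL's
    NOT EXISTS guard (no intervening event allowed)."""
    events = sorted(events, key=lambda e: e["ts"])
    sessions = []
    for i, ev in enumerate(events):
        if ev["state"] != "working":
            continue
        nxt = events[i + 1] if i + 1 < len(events) else None
        if nxt and nxt["state"] == "stopped":
            hours = (nxt["ts"] - ev["ts"]).total_seconds() / 3600
            sessions.append((ev["ts"], nxt["ts"], hours))
    return sessions

events = [
    {"state": "working", "ts": datetime(2025, 5, 16, 9, 0)},
    {"state": "stopped", "ts": datetime(2025, 5, 16, 10, 30)},
    {"state": "working", "ts": datetime(2025, 5, 16, 11, 0)},
    {"state": "working", "ts": datetime(2025, 5, 16, 11, 5)},  # re-report breaks the earlier pair
    {"state": "stopped", "ts": datetime(2025, 5, 16, 12, 5)},
]
print(pair_sessions(events))
```

As in the SQL, a repeated 'working' event invalidates the earlier start and the session is counted from the later one.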
@reports_bp.route("/reports/daily", methods=["GET"])
def get_daily_report():
"""
Endpoint for retrieving daily report data.
All dates are processed in UTC.
Query Parameters:
user (str, optional): Filter results by username
date (str, optional): Specific date in YYYY-MM-DD format (UTC)
"""
current_app.logger.info("Daily report API requested.")
try:
user_filter = request.args.get("user")
selected_date_str = request.args.get(
"date", datetime.utcnow().strftime("%Y-%m-%d")
)
current_app.logger.info(
f"Fetching daily report data for date: {selected_date_str}, user: {user_filter or 'All'}"
)
# fetch_duration_report now handles date filtering via calculate_duration_sql
results = fetch_duration_report(
time_period="daily",
user_filter=user_filter,
start_date=selected_date_str, # Pass the selected date as both start and end
end_date=selected_date_str, # for precise daily filtering by the SQL query.
)
# The SQL query is now expected to return data only for the selected_date_str for 'daily' period.
# No more manual filtering needed here.
# filtered_results = []
# for row in results:
# row_date = row['period_start']
# if hasattr(row_date, 'isoformat'):
# row_date = row_date.isoformat()
# if row_date and row_date.startswith(selected_date_str):
# filtered_results.append(row)
# current_app.logger.info(f"Raw results usernames for date {selected_date_str}: {[r['user'] for r in results]}")
report = format_report_data(results, "daily") # Pass results directly
# current_app.logger.info(f"Formatted data usernames: {[r['user'] for r in report]}")
current_app.logger.info(
f"Successfully generated daily report for date: {selected_date_str}, user: {user_filter or 'All'}. Found {len(results)} records."
)
return jsonify({"success": True, "data": report})
except SQLAlchemyError as e:
current_app.logger.error(f"Database error generating daily report: {e}")
return jsonify({"success": False, "message": "Database error generating report"}), 500
except Exception as e:
current_app.logger.exception(f"Unexpected error generating daily report: {e}")
return jsonify({"success": False, "message": "Error generating report"}), 500
@reports_bp.route("/reports/weekly", methods=["GET"])
def get_weekly_report():
"""
Endpoint for retrieving weekly report data.
All dates are processed in UTC.
Query Parameters:
user (str, optional): Filter results by username
day (str, optional): Specific day in YYYY-MM-DD format (UTC) to view a single day within the week.
If not provided, the report covers the current week.
"""
current_app.logger.info("Weekly report API requested.")
try:
user_filter = request.args.get("user")
day_filter_str = request.args.get("day")
start_date_param = None
end_date_param = None
report_period_type = "weekly" # For format_report_data
if day_filter_str:
current_app.logger.info(
f"Filtering weekly report for specific day: {day_filter_str}, user: {user_filter or 'All'}"
)
# Fetch as a daily report for that specific day
start_date_param = day_filter_str
end_date_param = day_filter_str
report_period_type = "daily" # Format as daily if single day is requested
results = fetch_duration_report(
"daily",
user_filter,
start_date=start_date_param,
end_date=end_date_param,
)
else:
current_app.logger.info(
f"Fetching weekly report for current week, user: {user_filter or 'All'}"
)
# Calculate start and end of the current UTC week (Mon-Sun)
now_utc = datetime.utcnow()
current_week_start_utc = now_utc - timedelta(
days=now_utc.weekday()
) # Monday
current_week_end_utc = current_week_start_utc + timedelta(days=6) # Sunday
start_date_param = current_week_start_utc.strftime("%Y-%m-%d")
end_date_param = current_week_end_utc.strftime("%Y-%m-%d")
current_app.logger.info(
f"Current UTC week: {start_date_param} to {end_date_param}"
)
results = fetch_duration_report(
time_period="weekly", # The SQL will group by week_start using DATE_TRUNC('week',...)
user_filter=user_filter,
start_date=start_date_param, # Filters events within this week
end_date=end_date_param,
)
# The SQL query `calculate_duration_sql` for 'weekly' period_grouping does:
# GROUP BY "user", DATE_TRUNC('week', start_time)
# So, if date filters are applied correctly to the events, this should return
# one row per user for the week specified by start_date/end_date,
# with period_start being the Monday of that week.
# The previous complex Python aggregation for current week might no longer be needed
# if the SQL correctly aggregates for the given date range.
# current_app.logger.info(f"Raw results usernames: {[r['user'] for r in results]}")
report = format_report_data(results, report_period_type)
# current_app.logger.info(f"Formatted data usernames: {[r['user'] for r in report]}")
log_message_suffix = (
f"day {day_filter_str}"
if day_filter_str
else f"week {start_date_param}-{end_date_param}"
)
current_app.logger.info(
f"Successfully generated weekly report for {log_message_suffix}, user: {user_filter or 'All'}. Found {len(results)} records."
)
return jsonify({"success": True, "data": report})
except SQLAlchemyError as e:
current_app.logger.error(f"Database error generating weekly report: {e}")
return jsonify({"success": False, "message": "Database error generating report"}), 500
except Exception as e:
current_app.logger.exception(f"Unexpected error generating weekly report: {e}")
return jsonify({"success": False, "message": "Error generating report"}), 500
@reports_bp.route("/reports/monthly", methods=["GET"])
def get_monthly_report():
"""
Endpoint for retrieving monthly report data.
All dates are processed in UTC. Defaults to the current month.
Query Parameters:
user (str, optional): Filter results by username
# month (str, optional): Specific month (e.g., YYYY-MM) - Future enhancement
"""
current_app.logger.info("Monthly report API requested.")
try:
user_filter = request.args.get("user")
current_app.logger.info(
f"Fetching monthly report data for current month, user: {user_filter or 'All'}"
)
# Calculate start and end of the current UTC month
now_utc = datetime.utcnow()
current_month_start_utc = now_utc.replace(
day=1, hour=0, minute=0, second=0, microsecond=0
)
# Find the first day of the next month, then subtract one day to get the end of the current month
if current_month_start_utc.month == 12:
next_month_start_utc = current_month_start_utc.replace(
year=current_month_start_utc.year + 1, month=1, day=1
)
else:
next_month_start_utc = current_month_start_utc.replace(
month=current_month_start_utc.month + 1, day=1
)
current_month_end_utc = next_month_start_utc - timedelta(days=1)
start_date_param = current_month_start_utc.strftime("%Y-%m-%d")
end_date_param = current_month_end_utc.strftime("%Y-%m-%d")
current_app.logger.info(
f"Current UTC month: {start_date_param} to {end_date_param}"
)
results = fetch_duration_report(
time_period="monthly", # SQL groups by DATE_TRUNC('month',...)
user_filter=user_filter,
start_date=start_date_param, # Filters events within this month
end_date=end_date_param,
)
# Similar to weekly, SQL query with date filters should provide data aggregated for the current month.
# Previous Python-based aggregation for current month is removed.
# current_app.logger.info(f"Raw results usernames: {[r['user'] for r in results]}")
report = format_report_data(results, "monthly")
# current_app.logger.info(f"Formatted data usernames: {[r['user'] for r in report]}")
current_app.logger.info(
f"Successfully generated monthly report for {start_date_param[:7]}, user: {user_filter or 'All'}. Found {len(results)} records."
)
return jsonify({"success": True, "data": report})
except SQLAlchemyError as e:
current_app.logger.error(f"Database error generating monthly report: {e}")
return jsonify({"success": False, "message": "Database error generating report"}), 500
except Exception as e:
current_app.logger.exception(f"Unexpected error generating monthly report: {e}")
return jsonify({"success": False, "message": "Error generating report"}), 500
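The next-month-minus-one-day arithmetic in the monthly route can be factored into a small standalone helper (a sketch only; the route keeps it inline):

```python
from datetime import datetime, timedelta

def month_bounds(now):
    # First day of the month, then first day of the next month minus
    # one day -- mirroring the route's year-rollover handling for December.
    start = now.replace(day=1, hour=0, minute=0, second=0, microsecond=0)
    if start.month == 12:
        nxt = start.replace(year=start.year + 1, month=1, day=1)
    else:
        nxt = start.replace(month=start.month + 1, day=1)
    return start.date(), (nxt - timedelta(days=1)).date()

print(month_bounds(datetime(2025, 2, 14)))
print(month_bounds(datetime(2025, 12, 31)))
```

The subtraction trick sidesteps hard-coding month lengths and leap-year rules.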
@reports_bp.route("/user-activity/<username>", methods=["GET"])
def get_user_activity(username):
"""
Gets detailed activity logs for a specific user.
All dates are processed in UTC.
Path Parameter:
username: Username to fetch activity for
Query Parameters:
start_date (str, optional): Start date in YYYY-MM-DD format (UTC)
end_date (str, optional): End date in YYYY-MM-DD format (UTC)
"""
current_app.logger.info(f"User activity logs requested for: {username}")
# Get date range from query parameters, default to current UTC day if not provided
start_date = request.args.get(
"start_date", datetime.utcnow().strftime("%Y-%m-%d")
)
end_date = request.args.get(
"end_date", start_date
) # Default end_date to start_date if not provided
try:
current_app.logger.info(
f"Fetching activity logs for user: {username}, from: {start_date}, to: {end_date}"
)
results = fetch_user_activity(username, start_date, end_date)
activity_logs = format_user_activity(results)
current_app.logger.info(
f"Successfully retrieved {len(activity_logs)} activity records for user: {username}"
)
return jsonify(
{
"success": True,
"data": {
"username": username,
"start_date": start_date,
"end_date": end_date,
"activities": activity_logs,
},
}
)
except SQLAlchemyError as e:
current_app.logger.error(f"Database error retrieving user activity for {username}: {e}")
return jsonify({"success": False, "message": "Database error retrieving activity logs"}), 500
except Exception as e:
current_app.logger.exception(f"Unexpected error retrieving user activity for {username}: {e}")
return (
jsonify({"success": False, "message": "Error retrieving activity logs"}),
500,
)
@reports_bp.route("/user-states", methods=["GET"])
def get_user_states():
"""
Endpoint for retrieving the current state of all users.
Returns a JSON object with usernames as keys and state ('working' or 'not_working') as values.
A user is considered 'not_working' if:
1. Their last state was 'stopped', OR
2. Their last state was 'working' but they've been inactive for more than 10 minutes
"""
current_app.logger.info("User states API requested")
try:
# Automatically consider a user not working after 10 minutes of inactivity
auto_timeout_seconds = 10 * 60 # 10 minutes in seconds
# current_time = datetime.utcnow() # Not strictly needed here as SQL uses NOW()
# SQL query to get the most recent state for each user with auto-timeout logic
sql_query = f"""
WITH LatestEvents AS (
SELECT
e."user",
e.state,
e.ts,
ROW_NUMBER() OVER(PARTITION BY e."user" ORDER BY e.ts DESC) as rn,
-- Calculate the time difference between the last event and now
EXTRACT(EPOCH FROM (NOW() - e.ts)) as seconds_since_last_event
FROM
work_events e
)
SELECT
"user",
state,
ts,
CASE
-- Consider as not working if last state was stopped or if inactive for 10+ minutes
WHEN state = 'stopped' OR seconds_since_last_event > {auto_timeout_seconds} THEN 'not_working'
ELSE 'working'
END as current_state,
seconds_since_last_event
FROM
LatestEvents
WHERE
rn = 1
"""
results = db.session.execute(text(sql_query)).mappings().all()
# Convert to dictionary with username -> state
user_states = {}
for row in results:
# Use the calculated current_state from the SQL query
user_states[row["user"]] = row["current_state"]
# Log users with timeout-induced state changes for debugging
# And send POST request if user timed out
if (
row["state"] == "working"
and row["current_state"] == "not_working"
and row["seconds_since_last_event"] > auto_timeout_seconds
):
user_id = row["user"]
current_app.logger.debug(
f"User {user_id} timed out: last activity was {row['seconds_since_last_event']:.0f} seconds ago. Sending POST update."
)
post_url_path = "/api/user_status_update"
payload = {"user_id": user_id, "status": "not working"}
try:
# Construct absolute URL for the internal POST request.
# Using request.url_root is generally reliable within the same application.
# Ensure SERVER_NAME is configured in production if app is behind a proxy
# and request.url_root does not correctly reflect the public-facing scheme/host.
target_post_url = request.url_root.rstrip("/") + post_url_path
current_app.logger.info(
f"Sending POST to {target_post_url} with payload: {payload}"
)
response = requests.post(target_post_url, json=payload, timeout=10)
if response.ok: # Checks for 2xx status codes
current_app.logger.info(
f"Successfully updated status for user {user_id} via POST to {post_url_path}. Status: {response.status_code}"
)
else:
current_app.logger.error(
f"Failed to update status for user {user_id} via POST to {post_url_path}. Status: {response.status_code}, Response: {response.text}"
)
except requests.exceptions.RequestException as req_e:
current_app.logger.error(
f"Error sending POST request to {post_url_path} for user {user_id}: {req_e}"
)
current_app.logger.info(
f"Successfully retrieved states for {len(user_states)} users"
)
return jsonify({"success": True, "data": user_states})
except SQLAlchemyError as e:
current_app.logger.error(f"Database error retrieving user states: {e}")
return jsonify({"success": False, "message": "Database error retrieving user states"}), 500
except Exception as e:
current_app.logger.exception(f"Unexpected error retrieving user states: {e}")
return (
jsonify({"success": False, "message": "Error retrieving user states"}),
500,
)
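The CASE expression in the user-states query reduces to a two-input rule; a pure-Python equivalent (a hypothetical helper, not part of the app):

```python
def derive_state(last_state, seconds_since_last_event, timeout_seconds=600):
    """Mirror the SQL CASE: 'not_working' when the last event was
    'stopped', or when a 'working' user has been silent past the
    timeout (600 s = the 10-minute auto-timeout)."""
    if last_state == "stopped" or seconds_since_last_event > timeout_seconds:
        return "not_working"
    return "working"

print(derive_state("working", 120))
print(derive_state("working", 900))
print(derive_state("stopped", 30))
```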
@reports_bp.route("/reports/real_work_hours", methods=["GET"])
def get_real_work_hours_report():
"""
Endpoint for retrieving aggregated "real work hours" (40-minute blocks).
Shows all users active in the period, with their real work hours if any.
Query Parameters:
username (str, optional): Filter results by username.
start_date (str, optional): Start date for filtering (YYYY-MM-DD).
end_date (str, optional): End date for filtering (YYYY-MM-DD).
If only start_date is provided, end_date defaults to start_date.
If no dates are given: with a username filter, all of that user's history is returned; otherwise the query defaults to the current UTC day.
"""
current_app.logger.info("Real work hours report API requested.")
try:
username_filter = request.args.get("username")
start_date_str = request.args.get("start_date")
end_date_str = request.args.get("end_date")
# Base query: select distinct user/date combinations from work_events within the period
# This ensures all users with activity in the period are potentially listed.
base_query = db.session.query(
WorkEvent.user.label("username"),
func.date(WorkEvent.ts).label("work_date")
).distinct()
if username_filter:
base_query = base_query.filter(WorkEvent.user == username_filter)
current_app.logger.info(f"Filtering real work hours for user: {username_filter}")
if start_date_str:
try:
start_date_obj = datetime.strptime(start_date_str, "%Y-%m-%d").date()
base_query = base_query.filter(func.date(WorkEvent.ts) >= start_date_obj)
current_app.logger.info(f"Filtering events from date: {start_date_str}")
if not end_date_str:
end_date_str = start_date_str
except ValueError:
return jsonify({"success": False, "message": "Invalid start_date format. Use YYYY-MM-DD."}), 400
else: # Require start_date for now to limit query scope if no user filter
if not username_filter:
# Default to today if no dates and no specific user is provided
start_date_obj = datetime.utcnow().date()
end_date_obj = start_date_obj
base_query = base_query.filter(func.date(WorkEvent.ts) == start_date_obj)
current_app.logger.info(f"Defaulting to date: {start_date_obj.strftime('%Y-%m-%d')} as no dates/user provided.")
# else if user is provided, we might allow fetching all their summaries without date filter by removing this else
if end_date_str:
try:
end_date_obj = datetime.strptime(end_date_str, "%Y-%m-%d").date()
base_query = base_query.filter(func.date(WorkEvent.ts) <= end_date_obj)
current_app.logger.info(f"Filtering events up to date: {end_date_str}")
except ValueError:
return jsonify({"success": False, "message": "Invalid end_date format. Use YYYY-MM-DD."}), 400
elif start_date_str: # Only start_date was given, end_date defaulted above
end_date_obj = start_date_obj
base_query = base_query.filter(func.date(WorkEvent.ts) <= end_date_obj)
# Subquery for active user-dates in the period
active_user_dates_subq = base_query.subquery('active_user_dates')
# Main query: LEFT JOIN UserRealWorkSummary with the active user-dates
final_query = db.session.query(
active_user_dates_subq.c.username,
active_user_dates_subq.c.work_date,
func.coalesce(UserRealWorkSummary.real_hours_counted, 0).label("real_hours_counted"),
UserRealWorkSummary.id.label("summary_id"), # Include id if needed from summary
UserRealWorkSummary.last_processed_event_id
).outerjoin(
UserRealWorkSummary,
and_(
active_user_dates_subq.c.username == UserRealWorkSummary.username,
active_user_dates_subq.c.work_date == UserRealWorkSummary.work_date
)
).order_by(active_user_dates_subq.c.work_date.desc(), active_user_dates_subq.c.username)
results = final_query.all()
report_data = [
{
"username": r.username,
"work_date": r.work_date.isoformat() if r.work_date else None,
"real_hours_counted": r.real_hours_counted,
# "summary_id": r.summary_id, # Optional
# "last_processed_event_id": r.last_processed_event_id # Optional
}
for r in results
]
current_app.logger.info(
f"Successfully generated real work hours report. Found {len(report_data)} user-date entries."
)
return jsonify({"success": True, "data": report_data})
except SQLAlchemyError as e:
current_app.logger.error(f"Database error generating real work hours report: {e}", exc_info=True)
return jsonify({"success": False, "message": "Database error generating report"}), 500
except Exception as e:
current_app.logger.exception(f"Unexpected error generating real work hours report: {e}", exc_info=True)
return jsonify({"success": False, "message": "Error generating report"}), 500
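The LEFT JOIN with COALESCE above guarantees every active (user, date) pair appears even when no summary row exists yet; the same shape in plain dicts (illustrative names, not the ORM models):

```python
def merge_active_with_summaries(active_pairs, summaries):
    """active_pairs: iterable of (username, work_date) seen in events.
    summaries: dict mapping (username, work_date) -> real_hours_counted.
    Missing summaries coalesce to 0, as in the SQL outer join."""
    return [
        {"username": u, "work_date": d, "real_hours_counted": summaries.get((u, d), 0)}
        for u, d in active_pairs
    ]

rows = merge_active_with_summaries(
    [("alice", "2025-05-16"), ("bob", "2025-05-16")],
    {("alice", "2025-05-16"): 3},
)
print(rows)
```

An inner join would instead drop "bob" entirely, hiding users whose summaries have not been computed yet.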

app/cli.py Normal file
@@ -0,0 +1,32 @@
"""
Custom Flask CLI commands.
"""
import click
from flask.cli import with_appcontext
from app.services.work_hours_service import calculate_and_store_real_work_hours
from app import db # Import db if direct session manipulation is needed, or rely on app context
from flask import current_app
@click.command("process-real-hours")
@with_appcontext
def process_real_hours_command():
"""
Manually triggers the calculation and storage of real work hours.
This processes WorkEvents and updates the UserRealWorkSummary table.
Useful for initial data backfilling, testing, or ad-hoc runs.
"""
current_app.logger.info("Manual trigger for process-real-hours command received.")
try:
calculate_and_store_real_work_hours(current_app)
current_app.logger.info("process-real-hours command finished successfully.")
except Exception as e:
current_app.logger.error(f"Error during process-real-hours command: {e}", exc_info=True)
click.echo(f"An error occurred: {e}")
def register_cli_commands(app):
"""
Registers custom CLI commands with the Flask application.
"""
app.cli.add_command(process_real_hours_command)
app.logger.info("Registered custom CLI commands.")

app/errors.py Normal file
@@ -0,0 +1,42 @@
"""
Global error handlers for the application.
This module provides centralized error handling functions for common HTTP errors.
"""
from flask import jsonify, request
def register_error_handlers(app):
"""
Register global error handlers with the Flask application.
Args:
app: Flask application instance
"""
@app.errorhandler(400)
def bad_request(error):
"""Handle 400 Bad Request errors."""
app.logger.warning(f"400 Bad Request error for URL: {request.url}")
return jsonify({"success": False, "message": "Bad request"}), 400
@app.errorhandler(404)
def not_found_error(error):
"""Handle 404 Not Found errors."""
app.logger.warning(f"404 Not Found error triggered for URL: {request.url}")
return jsonify({"success": False, "message": "Resource not found"}), 404
@app.errorhandler(405)
def method_not_allowed(error):
"""Handle 405 Method Not Allowed errors."""
app.logger.warning(f"405 Method Not Allowed error for URL: {request.url}")
return jsonify({"success": False, "message": "Method not allowed"}), 405
@app.errorhandler(500)
def internal_error(error):
"""Handle 500 Internal Server errors."""
# Note: The specific error causing the 500 might have already been logged
app.logger.error(f"Global 500 Internal Server error handler triggered: {error}")
return jsonify({"success": False, "message": "Internal server error"}), 500
app.logger.info("Registered global error handlers")

app/models.py Normal file
@@ -0,0 +1,84 @@
"""
Database models for employee workstation activity tracking.
This module defines the SQLAlchemy models used for storing activity events.
"""
from app import db
class WorkEvent(db.Model):
"""
Represents a user activity event with state transitions (working/stopped).
Attributes:
id (int): Primary key for the event
user (str): Username of the person whose activity is being tracked
state (str): Current activity state ('working' or 'stopped')
ts (datetime): Timestamp when the event occurred
"""
__tablename__ = "work_events"
id = db.Column(db.Integer, primary_key=True)
user = db.Column(db.String(100), nullable=False, index=True)
state = db.Column(db.String(10), nullable=False) # 'working' or 'stopped'
ts = db.Column(
db.DateTime,
nullable=False,
server_default=db.func.current_timestamp(),
index=True,
)
def __repr__(self):
"""Return a string representation of the model."""
return f"<WorkEvent(user='{self.user}', state='{self.state}', ts='{self.ts}')>"
def to_dict(self):
"""Convert model to dictionary for API responses."""
return {
"id": self.id,
"user": self.user,
"state": self.state,
"ts": self.ts.isoformat() if self.ts else None,
}
class UserRealWorkSummary(db.Model):
"""
Represents aggregated "real work hours" for a user on a specific date.
A "real work hour" is defined as 40 consecutive minutes of 'working' state.
Attributes:
id (int): Primary key for the summary record.
username (str): Username of the person (links to WorkEvent.user).
work_date (Date): The date for which the hours are summarized.
real_hours_counted (int): Number of 40-minute work blocks counted for this user on this date.
last_processed_event_id (int): The ID of the last WorkEvent processed to update this summary.
last_event_completed_block (bool): Indicates whether the last processed event completed a 40-minute block.
"""
__tablename__ = "user_real_work_summary"
id = db.Column(db.Integer, primary_key=True)
username = db.Column(db.String(255), nullable=False, index=True)
work_date = db.Column(db.Date, nullable=False, index=True)
real_hours_counted = db.Column(db.Integer, nullable=False, default=0)
last_processed_event_id = db.Column(db.Integer, db.ForeignKey('work_events.id'), nullable=True)
last_event_completed_block = db.Column(db.Boolean, nullable=False, default=False)
__table_args__ = (db.UniqueConstraint("username", "work_date", name="uq_user_work_date"),)
def __repr__(self):
"""Return a string representation of the model."""
return f"<UserRealWorkSummary(username='{self.username}', work_date='{self.work_date}', hours='{self.real_hours_counted}', last_event_id='{self.last_processed_event_id}', completed_block='{self.last_event_completed_block}')>"
def to_dict(self):
"""Convert model to dictionary for API responses."""
return {
"id": self.id,
"username": self.username,
"work_date": self.work_date.isoformat() if self.work_date else None,
"real_hours_counted": self.real_hours_counted,
"last_processed_event_id": self.last_processed_event_id,
"last_event_completed_block": self.last_event_completed_block,
}
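The "real work hour" definition in UserRealWorkSummary (one counted unit per 40 consecutive minutes in the 'working' state) can be sketched over a minute-granularity state stream. This shows only the counting rule; the actual service processes WorkEvent rows incrementally, and the reset-after-each-completed-block behavior here is an assumption:

```python
def count_real_blocks(states, block_minutes=40):
    """states: per-minute states ('working' / 'stopped').
    Count one block per 40 consecutive 'working' minutes; a 'stopped'
    minute resets the streak, and leftover minutes do not count."""
    blocks = 0
    streak = 0
    for s in states:
        if s == "working":
            streak += 1
            if streak == block_minutes:
                blocks += 1
                streak = 0  # start counting the next block from zero
        else:
            streak = 0
    return blocks

print(count_real_blocks(["working"] * 85))
print(count_real_blocks(["working"] * 39 + ["stopped"] + ["working"] * 39))
```

Note that 39 + 39 working minutes split by a single stop yields zero blocks: consecutiveness, not total time, is what counts.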

app/scheduler.py Normal file
@@ -0,0 +1,72 @@
"""
APScheduler setup and configuration for background tasks.
"""
import functools
from flask_apscheduler import APScheduler
# Initialize scheduler globally
scheduler = APScheduler()
def job_wrapper(func, app, job_id):
"""
Wrap a scheduled job so its start, success, and failure are logged
via the Flask app's logger. The second argument is the Flask app
itself (its logger is used), not an application context.
"""
@functools.wraps(func)
def wrapper(*args, **kwargs):
app.logger.info(f"Job '{job_id}' starting...")
try:
result = func(*args, **kwargs)
app.logger.info(f"Job '{job_id}' completed successfully.")
return result
except Exception as e:
app.logger.error(f"Job '{job_id}' failed: {e}", exc_info=True)
# Re-raise so APScheduler records the failure
raise
return wrapper
def init_scheduler(app):
"""
Initializes and starts the APScheduler with the Flask app.
Registers scheduled jobs.
"""
if not scheduler.running:
scheduler.init_app(app)
# Import jobs here to avoid circular dependencies if jobs use app context or db
from app.services.work_hours_service import calculate_and_store_real_work_hours
# Register the job that calculates real work hours.
# Adjust the interval as needed; it currently runs every minute.
# Ensure job_id is unique if you add more jobs
if not app.config.get('TESTING', False): # Do not run scheduler during tests by default
# Check if job already exists to prevent duplicates during reloads in debug mode
if scheduler.get_job('calc_real_work_hours') is None:
# Create a partial function that includes the app context
# The original function expects the app object as its first argument
# Explicitly set force_recalculate=False for scheduler to avoid double-counting
wrapped_job_func = job_wrapper(
functools.partial(calculate_and_store_real_work_hours, app, force_recalculate=False),
app,
'calc_real_work_hours',
)
scheduler.add_job(
id='calc_real_work_hours',
func=wrapped_job_func,
trigger='interval',
minutes=1,
replace_existing=True
)
app.logger.info("Scheduled job 'calc_real_work_hours' to run every 1 minute.")
else:
app.logger.info("Job 'calc_real_work_hours' already scheduled.")
try:
scheduler.start()
app.logger.info("APScheduler started.")
except Exception as e:
app.logger.error(f"Failed to start APScheduler: {e}", exc_info=True)
else:
app.logger.info("APScheduler not started in TEST mode.")
else:
app.logger.info("APScheduler already running.")
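The partial-plus-wrapper registration above can be exercised without APScheduler; a minimal sketch with a list standing in for the app logger (all names here are hypothetical):

```python
import functools

def make_logged_job(func, log, job_id):
    # Minimal stand-in for job_wrapper: record start/end around the job.
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        log.append(f"{job_id} starting")
        try:
            result = func(*args, **kwargs)
            log.append(f"{job_id} done")
            return result
        except Exception:
            log.append(f"{job_id} failed")
            raise
    return wrapper

log = []
# functools.partial pre-binds arguments, as the scheduler does for the real job
job = make_logged_job(functools.partial(lambda x: x * 2, 21), log, "demo")
result = job()
print(result, log)
```

Pre-binding via partial lets the scheduler call the job with no arguments while the job still receives the app and its flags.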

app/services/work_hours_service.py Normal file
@@ -0,0 +1,320 @@
"""
Service layer for calculating and storing real work hours.
"""
from app import db
from app.models import WorkEvent, UserRealWorkSummary
from sqlalchemy import func, and_, extract
from sqlalchemy.dialects.postgresql import insert as pg_insert # For PostgreSQL UPSERT
from sqlalchemy.dialects.sqlite import insert as sqlite_insert # For SQLite UPSERT
import datetime
from flask import current_app
def calculate_and_store_real_work_hours(app, force_recalculate=False):
"""
Calculates real work hours (blocks of 40 consecutive working minutes)
from WorkEvent data and stores/updates them in UserRealWorkSummary.
This function is intended to be called periodically by a scheduler
and can also be triggered manually via a CLI command.
Args:
app: The Flask application instance
force_recalculate: If True, ignores last_processed_event_id and recalculates from scratch
"""
# This job runs outside any request, so push an application context
# explicitly; db.session and current_app both require one for the
# duration of the run.
with app.app_context():
current_app.logger.info(f"Starting real work hours calculation... Force recalculate: {force_recalculate}")
processed_users_count = 0
total_hours_logged_session = 0
try:
# Get all distinct users who have work events
users_with_events = db.session.query(WorkEvent.user).distinct().all()
user_list = [u[0] for u in users_with_events]
for username in user_list:
current_app.logger.debug(f"Processing user: {username}")
# For each user, determine the starting point for fetching events.
# We process day by day to correctly attribute hours and manage last_processed_event_id.
# Get the minimum date for events for this user to establish a processing range.
min_event_date_result = db.session.query(func.min(func.date(WorkEvent.ts)))\
.filter(WorkEvent.user == username)\
.first()
if not min_event_date_result or not min_event_date_result[0]:
current_app.logger.debug(f"No events found for user: {username}")
continue
# Get the last processed event ID - set to 0 if force_recalculate is True
last_processed_id = 0
prior_event_completed_block = False
last_processed_event_object = None
if not force_recalculate:
# Only check for last processed ID if not forcing a recalculation
last_summary_for_user = UserRealWorkSummary.query\
.filter(UserRealWorkSummary.username == username)\
.order_by(UserRealWorkSummary.work_date.desc(), UserRealWorkSummary.last_processed_event_id.desc())\
.first()
last_processed_id = last_summary_for_user.last_processed_event_id if last_summary_for_user else 0
# Read the flag indicating if the very last processed event completed a block
prior_event_completed_block = last_summary_for_user.last_event_completed_block if last_summary_for_user else False
# Fetch the actual last processed event object to check its state
if last_processed_id > 0:
last_processed_event_object = WorkEvent.query.get(last_processed_id)
current_app.logger.debug(f"User {username}: Overall Last processed event ID: {last_processed_id}, Prior event completed block: {prior_event_completed_block}, Force recalculate: {force_recalculate}")
# LOOKBACK MECHANISM:
# If last_processed_id > 0 and not force_recalculate, look back up to 40 events to catch potential streaks
lookback_event_id = 0
if last_processed_id > 0 and not force_recalculate:
# Find the event ID to look back to (at least 40 events back if possible)
lookback_query = WorkEvent.query\
.filter(WorkEvent.user == username)\
.filter(WorkEvent.id <= last_processed_id)\
.order_by(WorkEvent.id.desc())\
.limit(40)
lookback_events = lookback_query.all()
if lookback_events:
# Get the ID of the oldest event in our lookback window
lookback_event_id = lookback_events[-1].id
current_app.logger.debug(f"Looking back from event ID {last_processed_id} to {lookback_event_id} for potential streaks")
# If we found a valid lookback ID, use it; otherwise, use last_processed_id
events_query = WorkEvent.query\
.filter(WorkEvent.user == username)
if force_recalculate:
# Start from the beginning if force_recalculate is True
current_app.logger.debug(f"Force recalculate set to True, processing all events for user {username}")
elif lookback_event_id > 0:
# Use the lookback ID to catch potential streaks
events_query = events_query.filter(WorkEvent.id >= lookback_event_id)
current_app.logger.debug(f"Using lookback ID {lookback_event_id} to catch potential streaks")
else:
# Original logic: only get events after the last processed ID
events_query = events_query.filter(WorkEvent.id > last_processed_id)
events_query = events_query.order_by(WorkEvent.ts.asc())
user_events = events_query.all()
if not user_events:
current_app.logger.debug(f"No new events to process for user: {username} since event ID {last_processed_id}")
continue
else:
current_app.logger.info(f"Fetched {len(user_events)} new events for user: {username} since event ID {lookback_event_id if lookback_event_id > 0 else last_processed_id}")
processed_users_count += 1
consecutive_working_minutes = 0
# last_event_ts = None # We need to maintain this relative to the streak
current_streak_start_ts = None
max_event_id_in_batch = last_processed_id
# Group events by date to process streaks within each day accurately
events_by_date = {}
for event in user_events:
# Skip events we've already processed (due to lookback), but only if not force_recalculate
if lookback_event_id > 0 and event.id <= last_processed_id and not force_recalculate:
# Skip updates but still count these events for streak calculation
current_app.logger.debug(f"Already processed event ID {event.id}, using for streak calculation only")
# Still track this event for streak calculations
event_date = event.ts.date()
if event_date not in events_by_date:
events_by_date[event_date] = []
events_by_date[event_date].append((event, True)) # True indicates already processed
else:
# New event we need to process fully
event_date = event.ts.date()
if event_date not in events_by_date:
events_by_date[event_date] = []
events_by_date[event_date].append((event, False if force_recalculate else (event.id <= last_processed_id))) # flag False = event still needs full processing
# If force_recalculate, clear all summary records for this user first
if force_recalculate:
delete_count = UserRealWorkSummary.query.filter_by(username=username).delete()
current_app.logger.info(f"Force recalculate: Deleted {delete_count} existing summary records for user {username}")
for work_date_obj, daily_events_with_flag in sorted(events_by_date.items()):
current_app.logger.debug(f"Processing date {work_date_obj} for user {username}")
consecutive_working_minutes = 0
current_streak_start_ts = None # Keep for potential future duration logic
# Extract just the events from the (event, flag) tuples
daily_events = [item[0] for item in daily_events_with_flag]
already_processed_flags = [item[1] for item in daily_events_with_flag]
# Initialize consecutive_working_minutes based on the last processed event
if not force_recalculate and last_processed_event_object and last_processed_event_object.state == 'working':
# Check if the last processed event is relevant to the current work_date_obj
# and is close in time to the first event in daily_events
# Only continue a streak if the last processed event DID NOT complete a block
if not prior_event_completed_block: # only continue the streak if the prior event did not complete a block
is_same_day_continuation = (
last_processed_event_object.ts.date() == work_date_obj and
daily_events and
(daily_events[0].ts - last_processed_event_object.ts).total_seconds() < 120 # e.g., within 2 minutes
)
is_cross_midnight_continuation = (
last_processed_event_object.ts.date() == work_date_obj - datetime.timedelta(days=1) and
daily_events and
(daily_events[0].ts - last_processed_event_object.ts).total_seconds() < 120 and # Within 2 minutes
last_processed_event_object.ts.time() >= datetime.time(23, 58) # Ended late previous day
)
if is_same_day_continuation or is_cross_midnight_continuation:
current_app.logger.debug(f"User {username} on {work_date_obj}: Continuing streak from event ID {last_processed_event_object.id} (which did not complete a block).")
consecutive_working_minutes = 1 # Start with 1, the current event will make it 2 if 'working'
current_streak_start_ts = last_processed_event_object.ts
else:
current_app.logger.debug(f"User {username} on {work_date_obj}: Last processed event ID {last_processed_event_object.id} completed a block. Not continuing streak from it.")
# consecutive_working_minutes remains 0, current_streak_start_ts remains None
last_event_in_day_id = None
current_event_completed_block_for_day = False # Flag for the current day's summary
for i, event in enumerate(daily_events): # daily_events are already sorted by ts
max_event_id_in_batch = max(max_event_id_in_batch, event.id)
last_event_in_day_id = event.id
already_processed = already_processed_flags[i]
if event.state == 'working':
if current_streak_start_ts is None: # Start of a new potential streak
current_streak_start_ts = event.ts
consecutive_working_minutes = 1
current_app.logger.debug(f"User {username} on {work_date_obj}: New 'working' streak started at {event.ts}, consecutive_minutes: {consecutive_working_minutes}")
else:
# Increment for each successive 'working' event; the gap between samples is not re-checked here (polling is assumed to be roughly one event per minute).
consecutive_working_minutes += 1
current_app.logger.debug(f"User {username} on {work_date_obj}: Continued 'working' streak, event at {event.ts}, consecutive_minutes: {consecutive_working_minutes}")
else: # state is 'stopped' or other non-working
if consecutive_working_minutes > 0:
current_app.logger.debug(f"User {username} on {work_date_obj}: Streak of {consecutive_working_minutes} mins ended by non-working state ('{event.state}') at {event.ts}.")
consecutive_working_minutes = 0
current_streak_start_ts = None
# Reset flag for current event, will be set true if it completes a block
event_completes_this_block = False
if consecutive_working_minutes == 40: # 40 consecutive minutes
logged_hours = 1 # count as 1 real hour
current_app.logger.info(f"User {username} completed a {consecutive_working_minutes}-minute work block on {work_date_obj} at {event.ts}. Logging {logged_hours} real hours.")
# Only increment the total and perform database updates if this is a NEW event or force_recalculate
if not already_processed or force_recalculate:
total_hours_logged_session += logged_hours
event_completes_this_block = True # This event completed a block
# UPSERT logic for UserRealWorkSummary
summary_data = {
'username': username,
'work_date': work_date_obj,
'real_hours_counted': logged_hours,
'last_processed_event_id': event.id,
'last_event_completed_block': True
}
current_app.logger.debug(f"User {username} on {work_date_obj}: Preparing UPSERT data: {summary_data}")
# Determine dialect for UPSERT
dialect = db.engine.dialect.name
if dialect == 'postgresql':
stmt = pg_insert(UserRealWorkSummary).values(**summary_data)
stmt = stmt.on_conflict_do_update(
index_elements=['username', 'work_date'],
set_=dict(real_hours_counted=UserRealWorkSummary.real_hours_counted + logged_hours,
last_processed_event_id=event.id,
last_event_completed_block=True
)
)
current_app.logger.debug(f"User {username} on {work_date_obj}: PostgreSQL UPSERT statement created.")
elif dialect == 'sqlite':
stmt = sqlite_insert(UserRealWorkSummary).values(**summary_data)
stmt = stmt.on_conflict_do_update(
index_elements=['username', 'work_date'],
set_=dict(real_hours_counted=UserRealWorkSummary.real_hours_counted + logged_hours,
last_processed_event_id=event.id,
last_event_completed_block=True
)
)
current_app.logger.debug(f"User {username} on {work_date_obj}: SQLite UPSERT statement created.")
else: # Generic fallback, might not be truly atomic or performant
current_app.logger.warning(f"User {username} on {work_date_obj}: Dialect {dialect} not explicitly handled for UPSERT, attempting merge.")
# No dialect-specific atomic UPSERT is available here, so fall back to
# a non-atomic query-then-insert/update. PostgreSQL and SQLite (the
# expected backends) are handled atomically above.
summary_rec = UserRealWorkSummary.query.filter_by(username=username, work_date=work_date_obj).first()
if summary_rec:
current_app.logger.debug(f"User {username} on {work_date_obj}: Found existing summary record, ID: {summary_rec.id}, hours: {summary_rec.real_hours_counted}. Will add {logged_hours} hours.")
summary_rec.real_hours_counted += logged_hours
summary_rec.last_processed_event_id = event.id
summary_rec.last_event_completed_block = True
else:
summary_rec = UserRealWorkSummary(**summary_data) # summary_data already has the flag
db.session.add(summary_rec)
current_app.logger.debug(f"User {username} on {work_date_obj}: No existing summary record found. Creating new one with {logged_hours} hours.")
db.session.commit() # Commit here for fallback
current_app.logger.info(f"User {username} on {work_date_obj}: Fallback UPSERT committed. Hours: {summary_rec.real_hours_counted}, Last Event ID: {summary_rec.last_processed_event_id}")
if dialect in ['postgresql', 'sqlite']:
db.session.execute(stmt)
current_app.logger.info(f"User {username} on {work_date_obj}: Dialect-specific UPSERT executed. Last Event ID: {event.id}. Hours increment: {logged_hours}.")
else:
current_app.logger.debug(f"User {username} on {work_date_obj}: Skipping database update for already processed event ID {event.id} but counting streak.")
consecutive_working_minutes = 0 # Reset streak
current_streak_start_ts = None
current_app.logger.debug(f"User {username} on {work_date_obj}: Streak reset after logging hours.")
# After processing all events for this date, record the day's
# last_processed_event_id and whether its final event completed a block.
# Block completions already set the flag during the UPSERT above; this
# pass ensures days that end mid-streak are recorded with the flag False.
current_event_completed_block_for_day = event_completes_this_block
# Skip this update if we didn't process any new events (all were from lookback and not force_recalculate)
if last_event_in_day_id is not None and (any(not flag for flag in already_processed_flags) or force_recalculate):
summary_rec_for_day = UserRealWorkSummary.query.filter_by(username=username, work_date=work_date_obj).first()
if summary_rec_for_day:
summary_rec_for_day.last_processed_event_id = last_event_in_day_id
# Update the flag based on whether the *last event processed for this day* completed a block
summary_rec_for_day.last_event_completed_block = current_event_completed_block_for_day
current_app.logger.debug(f"User {username} on {work_date_obj}: Updating existing summary. Last Event ID: {last_event_in_day_id}, Completed Block: {current_event_completed_block_for_day}")
else:
# If no hours were logged but events were processed for this day.
db.session.merge(UserRealWorkSummary(
username=username,
work_date=work_date_obj,
real_hours_counted=0,
last_processed_event_id=last_event_in_day_id,
last_event_completed_block=current_event_completed_block_for_day
))
current_app.logger.debug(f"User {username} on {work_date_obj}: Merged new summary record for day with 0 hours. Last Event ID: {last_event_in_day_id}, Completed Block: {current_event_completed_block_for_day}")
# current_app.logger.debug(f"Updated last_processed_event_id for {username} on {work_date_obj} to {last_event_in_day_id}") # Covered by above log
db.session.commit() # Commit all changes for the current user
current_app.logger.info(f"Committed all pending DB changes for user: {username}. Overall max event ID processed in this batch for user: {max_event_id_in_batch}")
current_app.logger.info(f"Real work hours calculation completed. Processed {processed_users_count} users. Logged {total_hours_logged_session} new real hour blocks this session.")
except Exception as e:
db.session.rollback()
current_app.logger.error(f"Error during real work hours calculation: {e}", exc_info=True)
finally:
db.session.remove() # Ensure session is cleaned up
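The crediting rule implemented above reduces to a simple state machine: 40 consecutive 'working' samples earn one real hour, and any non-working sample resets the streak. A minimal standalone sketch of that rule (a hypothetical `events` list of `(timestamp, state)` tuples; it deliberately ignores the day-grouping, lookback, and UPSERT bookkeeping handled by the real code):

```python
from datetime import datetime

def count_real_hours(events, block_minutes=40):
    """Count credited hours: each run of `block_minutes` consecutive
    'working' samples earns one hour, then the streak restarts."""
    hours = 0
    streak = 0
    for ts, state in sorted(events):
        if state == "working":
            streak += 1
            if streak == block_minutes:
                hours += 1
                streak = 0  # streak resets after a block is credited
        else:
            streak = 0  # any non-working sample breaks the streak
    return hours

# 80 consecutive 'working' minutes -> 2 credited hours
samples = [(datetime(2025, 5, 16, 9 + m // 60, m % 60), "working") for m in range(80)]
print(count_real_hours(samples))  # 2
```

The real implementation additionally persists the streak state across runs via `last_processed_event_id` and `last_event_completed_block`, which this sketch omits.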

15
app/utils/__init__.py Normal file
View File

@ -0,0 +1,15 @@
"""
Utility functions package for employee workstation activity tracking.
This package contains helper functions for building queries and formatting data.
"""
from app.utils.formatting import format_report_data, format_user_activity
from app.utils.queries import calculate_duration_sql
__all__ = [
"calculate_duration_sql",
# "filter_sql_by_user", # Removed as it's obsolete
"format_report_data",
"format_user_activity",
]

96
app/utils/formatting.py Normal file
View File

@ -0,0 +1,96 @@
"""
Data formatting functions for employee workstation activity tracking.
This module contains functions for formatting database query results into API responses.
"""
from flask import current_app
def format_report_data(results, time_period):
"""
Formats the raw database results into a list of dictionaries for the API.
Assumes the input `results` are already correctly aggregated by user and period from the SQL query.
Args:
results (list): List of database result rows (SQLAlchemy RowMapping objects)
time_period (str): Time period of the report ('daily', 'weekly', or 'monthly')
Returns:
list: List of formatted dictionaries for API response
"""
current_app.logger.debug(
f"Formatting report data for period: {time_period}. Input rows: {len(results)}"
)
period_key_map = {"daily": "day", "weekly": "week_start", "monthly": "month_start"}
period_key = period_key_map.get(time_period, "period_start")
formatted_data = []
for row in results:
period_value = row["period_start"]
if hasattr(period_value, "isoformat"):
period_value = period_value.isoformat()
first_login_time = row["first_login_time"]
if hasattr(first_login_time, "isoformat"):
first_login_time = first_login_time.isoformat()
duration_hours = row["total_hours"]
if duration_hours is None:
duration_hours = 0.0
else:
duration_hours = float(duration_hours)
formatted_data.append(
{
"user": row["user"],
period_key: period_value,
"duration_hours": duration_hours,
"first_login_time": first_login_time,
}
)
current_app.logger.debug(
f"Formatted report data created. Output rows: {len(formatted_data)}"
)
return formatted_data
def format_user_activity(results):
"""
Formats the raw user activity results into a list of dictionaries.
Args:
results (list): List of database result rows (SQLAlchemy RowMapping objects)
Returns:
list: List of formatted user activity dictionaries
"""
formatted_data = []
for row in results:
start_time = row["start_time"]
end_time = row["end_time"]
# Format timestamps for display
if hasattr(start_time, "isoformat"):
start_time = start_time.isoformat()
if hasattr(end_time, "isoformat"):
end_time = end_time.isoformat()
# Format duration as float
duration = (
float(row["session_duration_hours"])
if row["session_duration_hours"] is not None
else 0.0
)
formatted_data.append(
{
"date": row["work_date"].isoformat()
if hasattr(row["work_date"], "isoformat")
else str(row["work_date"]),
"start_time": start_time,
"end_time": end_time,
"duration_hours": round(duration, 2),
}
)
return formatted_data
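To make the mapping above concrete, here is the same per-row transformation in isolation, using a plain dict in place of a SQLAlchemy RowMapping and omitting the Flask logging (the `format_row` helper and the sample values are illustrative, not part of the module):

```python
import datetime

# Mirror of the period-key mapping used by format_report_data
period_key_map = {"daily": "day", "weekly": "week_start", "monthly": "month_start"}

def format_row(row, time_period):
    """Format a single aggregated row the way format_report_data does."""
    period_value = row["period_start"]
    if hasattr(period_value, "isoformat"):
        period_value = period_value.isoformat()
    hours = float(row["total_hours"]) if row["total_hours"] is not None else 0.0
    return {
        "user": row["user"],
        period_key_map.get(time_period, "period_start"): period_value,
        "duration_hours": hours,
        "first_login_time": row["first_login_time"],
    }

row = {"user": "alice", "period_start": datetime.date(2025, 5, 12),
       "total_hours": 7.5, "first_login_time": "2025-05-12T09:02:11"}
print(format_row(row, "weekly"))
# {'user': 'alice', 'week_start': '2025-05-12', 'duration_hours': 7.5,
#  'first_login_time': '2025-05-12T09:02:11'}
```

Note how the period key in the response (`day`, `week_start`, `month_start`) depends on the requested `time_period`.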

212
app/utils/queries.py Normal file
View File

@ -0,0 +1,212 @@
"""
SQL query building functions for employee workstation activity tracking.
This module contains functions for building SQL queries to calculate working durations.
"""
from datetime import datetime, timedelta
from flask import current_app
# Define timeout constants - consider moving to config if shared across modules
# This timeout is for capping session duration in reports if no explicit stop event is found soon after a start.
REPORT_QUERY_SESSION_TIMEOUT_SECONDS = 15 * 60 # 15 minutes
def calculate_duration_sql(
time_period, user_filter=None, start_date_filter=None, end_date_filter=None
):
"""
Generates the core SQL query to calculate working durations.
Uses LEAD() window function to pair 'working' with the next event.
Calculates duration in hours using PostgreSQL functions.
Args:
time_period (str): Time period to group by ('daily', 'weekly', or 'monthly').
user_filter (str, optional): Username to filter results by (exact match).
start_date_filter (str, optional): Start date for filtering events (YYYY-MM-DD).
end_date_filter (str, optional): End date for filtering events (YYYY-MM-DD).
Returns:
str: SQL query for calculating durations.
dict: Parameters for the SQL query.
"""
params = {}
date_conditions = []
if start_date_filter:
date_conditions.append("w_events.ts >= :start_date_param")
params["start_date_param"] = start_date_filter
if end_date_filter:
# end_date_filter ('YYYY-MM-DD') is treated as inclusive of the whole
# day: filter on ts < (end_date + 1 day) so events up to 23:59:59.999
# of that date are included.
try:
end_date_dt = datetime.strptime(end_date_filter, "%Y-%m-%d")
exclusive_end_date_dt = end_date_dt + timedelta(days=1)
params["end_date_param_exclusive"] = exclusive_end_date_dt.strftime(
"%Y-%m-%d"
)
date_conditions.append("w_events.ts < :end_date_param_exclusive")
except ValueError:
current_app.logger.warning(
f"Invalid end_date_filter format: {end_date_filter}. Ignoring."
)
event_selection_where_clause = ""
if date_conditions:
event_selection_where_clause = "WHERE " + " AND ".join(date_conditions)
# For virtual event injection, also apply date filters if possible to limit scope
# This part is tricky because virtual events are based on existing events.
# If an event is just outside start_date_filter but its virtual stop is inside, how to handle?
# For now, primary date filtering is on `public.work_events w_events`.
user_where_clause_for_all_events = ""
user_group_by_filter_final = ""
if user_filter:
user_where_clause_for_all_events = f"{'AND' if event_selection_where_clause else 'WHERE'} w_events.\"user\" = :user_param"
user_group_by_filter_final = 'WHERE cd."user" = :user_param'
params["user_param"] = user_filter
# Conditional SQL parts for the 'w' alias in AllEvents UNION
user_filter_condition_for_w = ""
if user_filter:
user_filter_condition_for_w = 'AND w."user" = :user_param'
start_date_condition_for_w = ""
if "start_date_param" in params:
start_date_condition_for_w = "AND w.ts >= :start_date_param"
end_date_condition_for_w = ""
if "end_date_param_exclusive" in params:
end_date_condition_for_w = "AND w.ts < :end_date_param_exclusive"
period_grouping_options = {
"daily": "DATE_TRUNC('day', start_time)",
"weekly": "DATE_TRUNC('week', start_time)",
"monthly": "DATE_TRUNC('month', start_time)",
}
period_grouping = period_grouping_options.get(time_period)
if not period_grouping:
current_app.logger.warning(
f"Invalid time_period '{time_period}' in calculate_duration_sql, defaulting to 'daily'."
)
period_grouping = period_grouping_options["daily"]
# SQL query structure
sql_query = f"""
WITH AllEvents AS (
-- Select base events within the date range and for the specific user if provided
SELECT
w_events.\"user\",
w_events.ts,
w_events.state
FROM public.work_events w_events
{event_selection_where_clause}
{user_where_clause_for_all_events}
UNION ALL
-- Add virtual "stopped" events for users who have been inactive for more than REPORT_QUERY_SESSION_TIMEOUT_SECONDS
-- This part must respect the same user and date filters as the base selection.
SELECT
w.\"user\",
w.ts + INTERVAL '{REPORT_QUERY_SESSION_TIMEOUT_SECONDS} seconds' AS ts,
'stopped' AS state
FROM public.work_events w
-- `w` is a separate SELECT in this UNION, so the user/date filters
-- from the first branch do not carry over; re-apply them to `w` here
-- so virtual stops are only generated for in-scope 'working' events.
WHERE w.state = 'working'
{user_filter_condition_for_w}
{start_date_condition_for_w}
{end_date_condition_for_w}
AND NOT EXISTS (
SELECT 1
FROM public.work_events w2
WHERE w2.\"user\" = w.\"user\"
AND w2.ts > w.ts
AND w2.ts <= w.ts + INTERVAL '{REPORT_QUERY_SESSION_TIMEOUT_SECONDS} seconds'
)
AND NOT EXISTS (
SELECT 1
FROM public.work_events w3
WHERE w3.\"user\" = w.\"user\"
AND w3.state = 'stopped'
AND w3.ts = w.ts + INTERVAL '{REPORT_QUERY_SESSION_TIMEOUT_SECONDS} seconds'
)
),
EventPairs AS (
-- Pair events using LEAD function
SELECT
\"user\",
ts AS start_time,
state,
LEAD(ts) OVER (PARTITION BY \"user\" ORDER BY ts) AS next_event_time,
LEAD(state) OVER (PARTITION BY \"user\" ORDER BY ts) AS next_event_state
FROM AllEvents
-- No user filter here, it's applied in AllEvents and final select
ORDER BY \"user\", ts
),
TimeoutAdjusted AS (
-- Adjust end time for sessions exceeding timeout
SELECT
\"user\",
start_time,
state,
CASE
WHEN state = 'working' AND next_event_time IS NOT NULL
AND EXTRACT(EPOCH FROM (next_event_time - start_time)) > {REPORT_QUERY_SESSION_TIMEOUT_SECONDS}
THEN start_time + INTERVAL '{REPORT_QUERY_SESSION_TIMEOUT_SECONDS} seconds'
WHEN state = 'working' AND next_event_time IS NULL -- Dangling working event, cap at timeout from start_time or now
THEN LEAST(start_time + INTERVAL '{REPORT_QUERY_SESSION_TIMEOUT_SECONDS} seconds', NOW() AT TIME ZONE 'UTC')
ELSE next_event_time
END AS adjusted_end_time,
next_event_state
FROM EventPairs
),
CalculatedDurations AS (
-- Calculate duration for each working session and find first login time
SELECT
\"user\",
{period_grouping} AS period_start,
SUM(
CASE
WHEN state = 'working' AND adjusted_end_time IS NOT NULL THEN
GREATEST(0, EXTRACT(EPOCH FROM (adjusted_end_time - start_time)) / 3600.0) -- Ensure non-negative
ELSE 0
END
) AS total_hours,
MIN(CASE WHEN state = 'working' THEN start_time END) AS first_login_time
FROM TimeoutAdjusted
WHERE state = 'working'
GROUP BY \"user\", period_start
)
-- Final aggregation and user filtering if applicable
SELECT
cd.\"user\",
cd.period_start,
SUM(cd.total_hours) AS total_hours,
MIN(cd.first_login_time) AS first_login_time
FROM CalculatedDurations cd
{user_group_by_filter_final}
GROUP BY cd.\"user\", cd.period_start
ORDER BY cd.\"user\", cd.period_start DESC;
"""
current_app.logger.debug(f"Generated SQL query for reports: {sql_query}")
current_app.logger.debug(f"SQL query parameters: {params}")
return sql_query, params
# The filter_sql_by_user function is now obsolete and has been removed.
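One detail in the query builder above worth isolating is the inclusive end-date handling: a `YYYY-MM-DD` end date becomes an exclusive upper bound of the following day, so events through 23:59:59 of the requested date are kept. The same arithmetic as a hypothetical standalone helper:

```python
from datetime import datetime, timedelta

def exclusive_upper_bound(end_date_filter):
    """Convert an inclusive YYYY-MM-DD end date to the exclusive bound
    used in the ts < :end_date_param_exclusive condition above."""
    end_date_dt = datetime.strptime(end_date_filter, "%Y-%m-%d")
    return (end_date_dt + timedelta(days=1)).strftime("%Y-%m-%d")

print(exclusive_upper_bound("2023-10-20"))  # 2023-10-21
```

The returned SQL string and params dict are then presumably executed elsewhere in the app with SQLAlchemy's `text()` and passed to `format_report_data`.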

9
app/views/__init__.py Normal file
View File

@ -0,0 +1,9 @@
"""
Web views package for employee workstation activity tracking.
This package contains the web routes for the dashboard interface.
"""
from app.views.dashboard import views_bp
__all__ = ["views_bp"]

33
app/views/dashboard.py Normal file
View File

@ -0,0 +1,33 @@
"""
Web routes for the dashboard interface.
This module provides the HTML routes for serving the dashboard interface.
"""
from flask import Blueprint, current_app, render_template
# Create a blueprint for dashboard views
views_bp = Blueprint("views", __name__, url_prefix="/")
@views_bp.route("/")
def dashboard():
"""
Renders the main dashboard interface.
Returns:
HTML template: The dashboard template
"""
current_app.logger.info("Dashboard page requested.")
return render_template("dashboard.html")
# Redundant healthcheck removed, one exists in app/__init__.py

313
client_tools/report.ps1 Normal file
View File

@ -0,0 +1,313 @@
#Requires -Version 5.1
<#
.SYNOPSIS
Tracks user activity and reports working/stopped status to a central server.
.DESCRIPTION
This PowerShell script monitors user idle time and reports state changes
to a central Flask API. It runs at user logon via Task Scheduler and
maintains a state machine between "working" and "stopped" states based
on a configurable inactivity threshold (default 10 minutes).
.NOTES
File Name : report.ps1
Author : IT Department
Version : 1.2
#>
# Hide the PowerShell console window
Add-Type -Name Window -Namespace Console -MemberDefinition '
[DllImport("Kernel32.dll")]
public static extern IntPtr GetConsoleWindow();
[DllImport("user32.dll")]
public static extern bool ShowWindow(IntPtr hWnd, Int32 nCmdShow);
'
$consolePtr = [Console.Window]::GetConsoleWindow()
[void][Console.Window]::ShowWindow($consolePtr, 0) # 0 = hide
# Configuration
$ApiUrl = "http://localhost:5000/api/report" # Default value, can be overridden by environment variable
$IdleThresholdMinutes = 10 # User idle time in minutes before state changes to 'stopped'.
$LogFilePath = Join-Path $env:TEMP "user_work_tracking_client.log"
$pollIntervalSeconds = 60 # How often to check user activity (in seconds).
$reportIntervalMinutes = 1 # How often to send a status update if state hasn't changed (in minutes). With a 1-minute poll interval, this ensures 'working' state is reported every minute. State changes are reported immediately.
$lastReportTime = [DateTime]::MinValue # Initialize last report time
# Helper Function for Logging
function Write-Log($Message) {
$timestamp = Get-Date -Format 'yyyy-MM-dd HH:mm:ss'
try {
"$timestamp - $Message" | Out-File -Append -FilePath $LogFilePath -Encoding UTF8 -ErrorAction Stop
} catch {
Write-Warning "Failed to write to log file '$LogFilePath': $($_.Exception.Message)"
# Optionally, write to console as fallback
Write-Host "$timestamp [LOG FALLBACK] - $Message"
}
}
Write-Log "================ Script Started ==================="
# Try to read config from environment or a local config file
try {
# Check for environment variable first
if ($env:API_ENDPOINT) {
$ApiUrl = $env:API_ENDPOINT
Write-Log "Using API URL from environment variable: $ApiUrl"
}
else {
# Look for a config file
$configPath = Join-Path ([System.IO.Path]::GetDirectoryName($PSCommandPath)) "config.env"
if (Test-Path $configPath) {
Write-Log "Found config file at $configPath"
Get-Content $configPath | ForEach-Object {
if ($_ -match '^API_ENDPOINT=(.*)$') {
$ApiUrl = $matches[1].Trim('"')
Write-Log "Using API URL from config file: $ApiUrl"
}
if ($_ -match '^IDLE_THRESHOLD_MINUTES=(\d+)$') {
$IdleThresholdMinutes = [int]$matches[1]
Write-Log "Using idle threshold from config file: $IdleThresholdMinutes minutes"
}
if ($_ -match '^POLL_INTERVAL_SECONDS=(\d+)$') {
$pollIntervalSeconds = [int]$matches[1]
Write-Log "Using poll interval from config file: $pollIntervalSeconds seconds"
}
if ($_ -match '^REPORT_INTERVAL_MINUTES=(\d+)$') {
$reportIntervalMinutes = [int]$matches[1]
Write-Log "Using report interval from config file: $reportIntervalMinutes minutes"
}
}
}
else {
Write-Log "No config file found. Using default values."
}
}
}
catch {
Write-Log "Error reading configuration: $_. Using default values."
}
Write-Log "Using API URL: $ApiUrl"
Write-Log "Idle Threshold (Minutes): $IdleThresholdMinutes"
Write-Log "Report Interval (Minutes): $reportIntervalMinutes"
# Initiate state
$currentState = "working" # Start in working state (user just logged in)
$lastReportedState = $null
# Function to get idle time using quser command
function Get-IdleTime {
try {
# First try the Win32 API method (more reliable)
if (-not ([System.Management.Automation.PSTypeName]'IdleTime').Type) {
Add-Type @'
using System;
using System.Runtime.InteropServices;
public class IdleTime {
[DllImport("user32.dll")]
public static extern bool GetLastInputInfo(ref LASTINPUTINFO plii);
public static TimeSpan GetIdleTime() {
LASTINPUTINFO lastInput = new LASTINPUTINFO();
lastInput.cbSize = (uint)Marshal.SizeOf(lastInput);
if (GetLastInputInfo(ref lastInput)) {
uint currentTickCount = (uint)Environment.TickCount;
uint idleTicks = currentTickCount - lastInput.dwTime;
return TimeSpan.FromMilliseconds(idleTicks);
} else {
return TimeSpan.Zero;
}
}
[StructLayout(LayoutKind.Sequential)]
public struct LASTINPUTINFO {
public uint cbSize;
public uint dwTime;
}
}
'@
}
# Use the Win32 API method
$idleTime = [IdleTime]::GetIdleTime()
$idleMinutes = $idleTime.TotalMinutes
Write-Log "Win32 API reports idle time: $idleMinutes minutes"
return $idleTime
}
catch {
Write-Log "Error using Win32 API for idle time: $_. Falling back to quser"
# Fallback: Try the quser command method
try {
$quser = quser $env:USERNAME | Select-Object -Skip 1
# quser columns: USERNAME SESSIONNAME ID STATE "IDLE TIME" "LOGON TIME".
# For active sessions IDLE TIME is the 5th field; when SESSIONNAME is
# blank (disconnected session) the fields shift left by one.
$fields = ($quser -replace '\s{2,}', ',').Split(',')
$idleText = if ($fields.Count -ge 6) { $fields[4] } else { $fields[3] }
# If no idle time is reported, return 0
if ($idleText -eq "." -or $idleText -eq "none" -or $idleText -eq $null) {
Write-Log "quser reports no idle time"
return [TimeSpan]::Zero
}
# Parse HH:MM format
if ($idleText -match '(\d+):(\d+)') {
$hours = [int]$Matches[1]
$minutes = [int]$Matches[2]
$result = [TimeSpan]::FromMinutes(($hours * 60) + $minutes)
Write-Log "quser parsed idle time: $($result.TotalMinutes) minutes"
return $result
}
# Parse "MM+" format (represents minutes)
if ($idleText -match '(\d+)\+') {
$minutes = [int]$Matches[1]
$result = [TimeSpan]::FromMinutes($minutes)
Write-Log "quser parsed idle time: $($result.TotalMinutes) minutes"
return $result
}
# Default to zero if we couldn't parse
Write-Log "quser couldn't parse idle format: '$idleText'"
return [TimeSpan]::Zero
}
catch {
Write-Log "Error getting idle time via quser: $_"
return [TimeSpan]::Zero
}
}
}
# Function to report state to the API
function Send-StateReport {
param (
[string]$State
)
# Generate timestamp in UTC format to ensure time sync with server
# The server processes all timestamps in UTC
$utcTimestamp = (Get-Date).ToUniversalTime().ToString("o") # ISO 8601 format
$payload = @{
user = $env:USERNAME
state = $State
ts = $utcTimestamp
} | ConvertTo-Json
Write-Log "Preparing to send payload with UTC timestamp: $utcTimestamp"
Write-Log "Full payload: $payload"
try {
Write-Log "Attempting API call to $ApiUrl"
# Send to API
$response = Invoke-RestMethod -Uri $ApiUrl `
-Method Post `
-Body $payload `
-ContentType "application/json" `
-ErrorAction Stop
Write-Log "API call successful. Response: $($response | ConvertTo-Json -Depth 3)"
# Update last report time
$script:lastReportTime = Get-Date
# Check for success field in response
if ($response.success -eq $true) {
Write-Log "State '$State' reported successfully."
return $true
} else {
Write-Log "API indicated failure. Message: $($response.message)"
return $false
}
}
catch {
# Log error details
$errorMessage = "Failed to report state '$State' to API. Error: $($_.Exception.Message)"
if ($_.Exception.Response) {
$statusCode = $_.Exception.Response.StatusCode
$statusDescription = $_.Exception.Response.StatusDescription
$errorMessage += " Status Code: $statusCode ($statusDescription)"
try {
$responseBody = $_.Exception.Response.GetResponseStream()
$streamReader = New-Object System.IO.StreamReader($responseBody)
$errorBody = $streamReader.ReadToEnd()
$streamReader.Close()
$responseBody.Close()
$errorMessage += " Response Body: $errorBody"
} catch {
$errorMessage += " (Could not read error response body)"
}
}
Write-Log $errorMessage
return $false
}
}
# Initial state report at startup (user just logged in = working)
if (Send-StateReport -State $currentState) {
$lastReportedState = $currentState
Write-Log "Initial state reported as '$currentState'"
} else {
Write-Log "Failed to report initial state. Will retry on next check."
}
# Main monitoring loop
Write-Log "Starting activity monitoring with $IdleThresholdMinutes minute idle threshold"
try {
while ($true) {
# Get current idle time
$idleTime = Get-IdleTime
$idleMinutes = $idleTime.TotalMinutes
Write-Log "Idle time check: $idleMinutes minutes."
# Get-IdleTime returns TimeSpan.Zero on failure, so a negative value
# should not occur; this guard is defensive only.
if ($idleTime -lt [TimeSpan]::Zero) {
Write-Log "Error getting idle time. Waiting for next check."
Start-Sleep -Seconds $pollIntervalSeconds
continue
}
# Determine state based on idle time
$newState = if ($idleMinutes -ge $IdleThresholdMinutes) { "stopped" } else { "working" }
Write-Log "Determined state: $newState (Current: $currentState)"
# Check if it's time to send a periodic report (every $reportIntervalMinutes minutes)
$timeSinceLastReport = (Get-Date) - $lastReportTime
$shouldSendPeriodicReport = $timeSinceLastReport.TotalMinutes -ge $reportIntervalMinutes
Write-Log "Time since last report: $($timeSinceLastReport.TotalMinutes.ToString("F2")) minutes (threshold: $reportIntervalMinutes)"
# If state changed or it's time for periodic report, send it
if ($newState -ne $currentState -or $shouldSendPeriodicReport) {
# If it's a state change, update current state
if ($newState -ne $currentState) {
Write-Log "State changed from '$currentState' to '$newState'. Reporting to API."
$currentState = $newState
} else {
Write-Log "Sending periodic report (current state: $currentState)"
}
# Report to API
try {
if (Send-StateReport -State $currentState) {
$lastReportedState = $currentState
}
}
catch {
Write-Log "Error reporting state: $_"
}
}
# Sleep until next check
Start-Sleep -Seconds $pollIntervalSeconds
}
}
catch {
Write-Log "Critical error in main loop: $_. Script will attempt to report 'stopped' state in finally block."
exit 1 # Exit with an error code, but finally block will still run
}
finally {
Write-Log "Script ending (e.g., user logoff, task stop, or after error). Attempting to report 'stopped' state."
Send-StateReport -State "stopped"
Write-Log "Final 'stopped' state report attempt made."
Write-Log "================ Script Ended (via Finally) ================"
}


@ -0,0 +1,43 @@
Set WshShell = CreateObject("WScript.Shell")
Set FSO = CreateObject("Scripting.FileSystemObject")
' Create a log file to help with troubleshooting
LogPath = FSO.BuildPath(FSO.GetSpecialFolder(2), "user_tracking_launcher.log") ' Temp folder
Set LogFile = FSO.OpenTextFile(LogPath, 8, True) ' 8 = ForAppending
LogFile.WriteLine(Now & " - VBS Launcher started")
' Get the current script directory
ScriptDir = FSO.GetParentFolderName(WScript.ScriptFullName)
LogFile.WriteLine(Now & " - Script directory: " & ScriptDir)
' Get the path to the PowerShell script
PowerShellScript = FSO.BuildPath(ScriptDir, "report.ps1")
LogFile.WriteLine(Now & " - PowerShell script path: " & PowerShellScript)
' Check if the PowerShell script exists
If FSO.FileExists(PowerShellScript) Then
LogFile.WriteLine(Now & " - PowerShell script exists")
Else
LogFile.WriteLine(Now & " - ERROR: PowerShell script does not exist!")
End If
' Build the command
PowerShellPath = "powershell.exe"
ScriptPath = """" & PowerShellScript & """"
Cmd = PowerShellPath & " -NoProfile -ExecutionPolicy Bypass -WindowStyle Hidden -File " & ScriptPath
LogFile.WriteLine(Now & " - Command: " & Cmd)
' Run the command
On Error Resume Next
WshShell.Run Cmd, 0, False
If Err.Number <> 0 Then
LogFile.WriteLine(Now & " - ERROR: " & Err.Description)
Else
LogFile.WriteLine(Now & " - Command executed successfully")
End If
On Error Goto 0
LogFile.Close
Set LogFile = Nothing
Set FSO = Nothing
Set WshShell = Nothing


@ -0,0 +1,116 @@
#Requires -Version 3.0
#Requires -RunAsAdministrator
<#
.SYNOPSIS
Creates a scheduled task to run the user activity tracking script at logon.
.DESCRIPTION
This script creates a Windows Task Scheduler task that launches the
run_hidden.vbs (which in turn runs report.ps1) script whenever a user
logs into the system. It must be run with administrative privileges.
.NOTES
File Name : schedule_task.ps1
Author : IT Department
Version : 1.5 (Modified to schedule run_hidden.vbs)
.EXAMPLE
.\schedule_task.ps1 -ScriptPath "C:\Path\To\report.ps1"
(Ensure run_hidden.vbs is in the same directory as report.ps1)
# To avoid security warnings, you can unblock the files first:
# Unblock-File -Path schedule_task.ps1
# Unblock-File -Path report.ps1
# Unblock-File -Path run_hidden.vbs
#>
param (
[Parameter(Mandatory=$true)]
[string]$ScriptPath # Path to the main report.ps1 script
)
# Begin by unblocking the files to prevent security warnings
Write-Host "Attempting to unblock script files..."
$ReportScriptDir = Split-Path -Path $ScriptPath -Parent
$VbsPath = Join-Path -Path $ReportScriptDir -ChildPath "run_hidden.vbs"
try {
Unblock-File -Path $ScriptPath -ErrorAction SilentlyContinue
if (Test-Path $VbsPath) {
Unblock-File -Path $VbsPath -ErrorAction SilentlyContinue
}
Unblock-File -Path $MyInvocation.MyCommand.Path -ErrorAction SilentlyContinue
Write-Host "Files unblocked successfully or unblock attempted."
} catch {
Write-Host "Unable to unblock files. If you see security warnings, you may need to run Unblock-File manually."
}
# Verify the main PowerShell script exists
if (-not (Test-Path $ScriptPath)) {
Write-Error "Main PowerShell script (report.ps1) not found at path: $ScriptPath"
exit 1
}
# Make sure the path to report.ps1 is absolute
$AbsoluteReportScriptPath = (Resolve-Path $ScriptPath).Path
$ResolvedScriptDir = Split-Path -Parent $AbsoluteReportScriptPath
$AbsoluteVbsLauncherPath = Join-Path $ResolvedScriptDir "run_hidden.vbs"
if (-not (Test-Path $AbsoluteVbsLauncherPath)) {
Write-Error "VBS Launcher (run_hidden.vbs) not found in script directory: $ResolvedScriptDir (expected alongside $AbsoluteReportScriptPath)"
exit 1
}
# Task settings
$taskName = "UserActivityTracking"
$taskDescription = "Monitors user activity and reports to central server via VBS launcher"
# Check if the task already exists and remove it
$existingTask = Get-ScheduledTask -TaskName $taskName -ErrorAction SilentlyContinue
if ($existingTask) {
Write-Host "Task '$taskName' already exists. Removing it..."
schtasks /Delete /TN $taskName /F
}
# The command to be executed by the task is the path to the VBS file
$TaskExecutableAndArgs = $AbsoluteVbsLauncherPath
try {
# Use schtasks.exe command to create the task
# /TR now points directly to the .vbs file, which is simpler for schtasks quoting
# Ensure task name and executable path are quoted if they could contain spaces
$createCmd = "schtasks /Create /TN `"$taskName`" /TR `"$TaskExecutableAndArgs`" /SC ONLOGON /F /RU INTERACTIVE /IT /DELAY `"0000:30`""
Write-Host "Creating scheduled task with command: $createCmd"
Invoke-Expression $createCmd
# Verify task was created
$taskExists = schtasks /Query /TN $taskName 2>$null
if ($LASTEXITCODE -eq 0) {
Write-Host "Task '$taskName' has been created successfully."
Write-Host "The activity tracking launcher (run_hidden.vbs) will now run at each user logon."
Write-Host "VBS Launcher path: $AbsoluteVbsLauncherPath"
} else {
Write-Error "Task creation verification failed. The task may not have been created properly."
}
} catch {
Write-Error "Failed to create scheduled task: $_"
exit 1
}
# Copy config.env next to report.ps1 file if it exists in the source deployment directory
$configEnvSource = Join-Path (Split-Path $MyInvocation.MyCommand.Path -Parent) "config.env"
$configEnvDest = Join-Path $ResolvedScriptDir "config.env"
if (Test-Path $configEnvSource) {
if ((Resolve-Path $configEnvSource).Path -ne (Resolve-Path $configEnvDest -ErrorAction SilentlyContinue).Path) {
try {
Copy-Item -Path $configEnvSource -Destination $configEnvDest -Force
Write-Host "Copied config.env to script directory: $configEnvDest"
} catch {
Write-Warning "Failed to copy config.env to script directory: $_"
}
} else {
Write-Host "Config.env is already in the script directory."
}
} else {
Write-Warning "config.env not found in the directory of schedule_task.ps1. Ensure it's manually placed next to report.ps1."
}
Write-Host "`nIMPORTANT: To manually verify the task was created correctly, open Task Scheduler`n(taskschd.msc) and check for the 'UserActivityTracking' task."


@ -0,0 +1,14 @@
CREATE TABLE IF NOT EXISTS user_real_work_summary (
id SERIAL PRIMARY KEY,
username VARCHAR(255) NOT NULL,
work_date DATE NOT NULL,
real_hours_counted INTEGER NOT NULL DEFAULT 0,
last_processed_event_id INTEGER,
last_event_completed_block BOOLEAN NOT NULL DEFAULT FALSE,
CONSTRAINT uq_user_work_date UNIQUE (username, work_date)
);
-- Add any necessary indexes if performance testing indicates a need.
-- For example, an index on (username, work_date) is automatically created by the UNIQUE constraint.
-- An index on last_processed_event_id might be useful if queries filter by it frequently for unprocessed records.
-- CREATE INDEX IF NOT EXISTS idx_user_real_work_summary_last_processed ON user_real_work_summary(last_processed_event_id);
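The `UNIQUE (username, work_date)` constraint above is what lets an aggregation job upsert one row per user per day. A minimal sketch of that upsert, assuming a job like this exists (it is not shown in this commit); SQLite is used here only so the demo is self-contained — the production schema targets PostgreSQL, where `ON CONFLICT ... DO UPDATE` behaves the same:

```python
# Hypothetical incremental-upsert job against user_real_work_summary.
# SQLite stand-in for the PostgreSQL table (SERIAL -> INTEGER PRIMARY KEY).
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE user_real_work_summary (
        id INTEGER PRIMARY KEY AUTOINCREMENT,
        username TEXT NOT NULL,
        work_date TEXT NOT NULL,
        real_hours_counted INTEGER NOT NULL DEFAULT 0,
        last_processed_event_id INTEGER,
        last_event_completed_block INTEGER NOT NULL DEFAULT 0,
        UNIQUE (username, work_date)
    )
""")

def record_hours(username, work_date, hours, last_event_id):
    # One row per (username, work_date); re-running for the same day
    # accumulates hours and advances the last-processed event id.
    conn.execute("""
        INSERT INTO user_real_work_summary
            (username, work_date, real_hours_counted, last_processed_event_id)
        VALUES (?, ?, ?, ?)
        ON CONFLICT (username, work_date) DO UPDATE SET
            real_hours_counted = real_hours_counted + excluded.real_hours_counted,
            last_processed_event_id = excluded.last_processed_event_id
    """, (username, work_date, hours, last_event_id))

record_hours("Ilia", "2025-05-04", 4, 101)
record_hours("Ilia", "2025-05-04", 3, 107)
row = conn.execute(
    "SELECT real_hours_counted, last_processed_event_id "
    "FROM user_real_work_summary WHERE username = 'Ilia'"
).fetchone()
print(row)  # (7, 107)
```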


@ -0,0 +1,314 @@
-- Employee Workstation Activity Tracking System
-- Database Schema Creation Script
-- Instructions:
-- This script uses PostgreSQL syntax (SERIAL, INTERVAL, date_trunc, EXTRACT).
-- PostgreSQL: psql -U username -d database_name -f create_db.sql
-- For SQLite, replace SERIAL PRIMARY KEY with INTEGER PRIMARY KEY AUTOINCREMENT
-- and use the SQLite variants noted in the comments below.
-- Drop existing table if it exists
DROP TABLE IF EXISTS work_events;
-- Create the work_events table
CREATE TABLE work_events (
id SERIAL PRIMARY KEY,
"user" VARCHAR(100) NOT NULL, -- Note the double quotes
state VARCHAR(10) NOT NULL CHECK (state IN ('working', 'stopped')),
ts TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP
);
-- Create indexes for performance optimization
CREATE INDEX idx_work_events_user ON work_events("user");
CREATE INDEX idx_work_events_ts ON work_events(ts);
CREATE INDEX idx_work_events_user_state ON work_events("user", state);
-- Optional: Sample data for testing (comment out in production)
-- Clear previous sample data if necessary before inserting new data
-- DELETE FROM work_events WHERE "user" LIKE 'User %' OR "user" LIKE 'user%';
-- Remove older generic sample data
/*
INSERT INTO work_events ("user", state, ts) VALUES
('user1', 'working', CURRENT_TIMESTAMP - INTERVAL '2 hours'),
('user1', 'stopped', CURRENT_TIMESTAMP - INTERVAL '1 hour'),
('user1', 'working', CURRENT_TIMESTAMP - INTERVAL '30 minutes'),
('user2', 'working', CURRENT_TIMESTAMP - INTERVAL '3 hours'),
('user2', 'stopped', CURRENT_TIMESTAMP - INTERVAL '2 hours 30 minutes'),
('user2', 'working', CURRENT_TIMESTAMP - INTERVAL '2 hours'),
('user2', 'stopped', CURRENT_TIMESTAMP - INTERVAL '1 hour');
*/
-- Example queries for reporting
-- 1. Daily activity summary per user
SELECT
"user",
DATE(ts) AS day,
COUNT(*) AS event_count,
SUM(CASE WHEN state = 'working' THEN 1 ELSE 0 END) AS working_events
FROM work_events
GROUP BY "user", day
ORDER BY day DESC, "user";
-- 2. Weekly activity summary
SELECT
"user",
date_trunc('week', ts) AS week_start,
COUNT(*) AS event_count
FROM work_events
GROUP BY "user", week_start
ORDER BY week_start DESC, "user";
-- 3. Monthly activity summary
SELECT
"user",
date_trunc('month', ts) AS month_start,
COUNT(*) AS event_count
FROM work_events
GROUP BY "user", month_start
ORDER BY month_start DESC, "user";
-- Notes on data retention:
-- According to requirements, raw event logs should be retained for 1 year.
-- Set up a periodic cleanup job to remove old data, for example:
-- PostgreSQL (kept commented out so this script never deletes data on its own):
/*
DELETE FROM work_events
WHERE ts < CURRENT_TIMESTAMP - INTERVAL '1 year';
*/
-- SQLite:
/*
DELETE FROM work_events
WHERE ts < datetime('now', '-1 year');
*/
-- Enhanced Sample Data Generation (relative to assumed date 2025-05-04)
-- The sample users below have data for Today, This Week, and the prior month (April)
-- Sample data for real users: Robert, Ilia, and Nika
-- Robert: Working pattern for today, this week, and this month
INSERT INTO work_events ("user", state, ts) VALUES ('Robert', 'working', '2025-05-04 08:30:00'); -- Today
INSERT INTO work_events ("user", state, ts) VALUES ('Robert', 'stopped', '2025-05-04 12:30:00');
INSERT INTO work_events ("user", state, ts) VALUES ('Robert', 'working', '2025-05-04 13:15:00');
INSERT INTO work_events ("user", state, ts) VALUES ('Robert', 'stopped', '2025-05-04 17:00:00');
INSERT INTO work_events ("user", state, ts) VALUES ('Robert', 'working', '2025-05-02 08:45:00'); -- This Week
INSERT INTO work_events ("user", state, ts) VALUES ('Robert', 'stopped', '2025-05-02 16:15:00');
INSERT INTO work_events ("user", state, ts) VALUES ('Robert', 'working', '2025-04-15 09:00:00'); -- This Month
INSERT INTO work_events ("user", state, ts) VALUES ('Robert', 'stopped', '2025-04-15 17:30:00');
-- Ilia: Working pattern for today, this week, and this month
INSERT INTO work_events ("user", state, ts) VALUES ('Ilia', 'working', '2025-05-04 09:00:00'); -- Today
INSERT INTO work_events ("user", state, ts) VALUES ('Ilia', 'stopped', '2025-05-04 13:00:00');
INSERT INTO work_events ("user", state, ts) VALUES ('Ilia', 'working', '2025-05-04 14:00:00');
INSERT INTO work_events ("user", state, ts) VALUES ('Ilia', 'stopped', '2025-05-04 18:00:00');
INSERT INTO work_events ("user", state, ts) VALUES ('Ilia', 'working', '2025-05-03 09:30:00'); -- This Week
INSERT INTO work_events ("user", state, ts) VALUES ('Ilia', 'stopped', '2025-05-03 17:30:00');
INSERT INTO work_events ("user", state, ts) VALUES ('Ilia', 'working', '2025-04-20 08:30:00'); -- This Month
INSERT INTO work_events ("user", state, ts) VALUES ('Ilia', 'stopped', '2025-04-20 16:30:00');
-- Nika: Working pattern for today, this week, and this month
INSERT INTO work_events ("user", state, ts) VALUES ('Nika', 'working', '2025-05-04 08:15:00'); -- Today
INSERT INTO work_events ("user", state, ts) VALUES ('Nika', 'stopped', '2025-05-04 12:00:00');
INSERT INTO work_events ("user", state, ts) VALUES ('Nika', 'working', '2025-05-04 12:45:00');
INSERT INTO work_events ("user", state, ts) VALUES ('Nika', 'stopped', '2025-05-04 16:30:00');
INSERT INTO work_events ("user", state, ts) VALUES ('Nika', 'working', '2025-05-01 08:30:00'); -- This Week
INSERT INTO work_events ("user", state, ts) VALUES ('Nika', 'stopped', '2025-05-01 16:45:00');
INSERT INTO work_events ("user", state, ts) VALUES ('Nika', 'working', '2025-04-10 09:15:00'); -- This Month
INSERT INTO work_events ("user", state, ts) VALUES ('Nika', 'stopped', '2025-04-10 17:15:00');
-- Enhanced SQL queries for reporting with duration calculations
-- 4. Daily working duration per user (for dashboard)
-- This calculates the total working hours per day by matching working->stopped pairs
SELECT
w1."user",
DATE(w1.ts) AS day,
SUM(
EXTRACT(EPOCH FROM (w2.ts - w1.ts))/3600
) AS duration_hours,
MIN(CASE WHEN w1.state = 'working' THEN w1.ts END) AS first_login_time
FROM
work_events w1
JOIN
work_events w2 ON w1."user" = w2."user"
AND w1.state = 'working'
AND w2.state = 'stopped'
AND w2.ts > w1.ts
AND NOT EXISTS (
SELECT 1 FROM work_events w3
WHERE w3."user" = w1."user"
AND w3.ts > w1.ts AND w3.ts < w2.ts
)
GROUP BY
w1."user", day
ORDER BY
day DESC, w1."user";
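The working→stopped pairing performed by the self-join above (with `NOT EXISTS` guaranteeing no intervening event) can be sketched in plain Python — a simplified sweep over events sorted by timestamp, with illustrative helper names, not part of this commit:

```python
from datetime import datetime

def daily_working_hours(events):
    """events: list of (user, state, iso_ts) tuples mirroring work_events rows.
    Pairs each 'working' event with the immediately following event for the
    same user and counts the gap only when that next event is 'stopped' --
    the same adjacency rule the NOT EXISTS subquery enforces in SQL."""
    totals = {}   # (user, date) -> hours
    by_user = {}  # user -> (state, timestamp) of the previous event
    for user, state, ts in sorted(events, key=lambda e: (e[0], e[2])):
        t = datetime.fromisoformat(ts)
        prev = by_user.get(user)
        if prev and prev[0] == "working" and state == "stopped":
            key = (user, prev[1].date().isoformat())
            totals[key] = totals.get(key, 0.0) + (t - prev[1]).total_seconds() / 3600
        by_user[user] = (state, t)
    return totals

events = [
    ("Robert", "working", "2025-05-04T08:30:00"),
    ("Robert", "stopped", "2025-05-04T12:30:00"),
    ("Robert", "working", "2025-05-04T13:15:00"),
    ("Robert", "stopped", "2025-05-04T17:00:00"),
]
print(daily_working_hours(events))  # {('Robert', '2025-05-04'): 7.75}
```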
-- 5. Weekly working duration per user (for dashboard)
SELECT
w1."user",
date_trunc('week', w1.ts) AS week_start,
SUM(
EXTRACT(EPOCH FROM (w2.ts - w1.ts))/3600
) AS duration_hours,
MIN(CASE WHEN w1.state = 'working' THEN w1.ts END) AS first_login_time
FROM
work_events w1
JOIN
work_events w2 ON w1."user" = w2."user"
AND w1.state = 'working'
AND w2.state = 'stopped'
AND w2.ts > w1.ts
AND NOT EXISTS (
SELECT 1 FROM work_events w3
WHERE w3."user" = w1."user"
AND w3.ts > w1.ts AND w3.ts < w2.ts
)
GROUP BY
w1."user", week_start
ORDER BY
week_start DESC, w1."user";
-- 6. Monthly working duration per user (for dashboard)
SELECT
w1."user",
date_trunc('month', w1.ts) AS month_start,
SUM(
EXTRACT(EPOCH FROM (w2.ts - w1.ts))/3600
) AS duration_hours,
MIN(CASE WHEN w1.state = 'working' THEN w1.ts END) AS first_login_time
FROM
work_events w1
JOIN
work_events w2 ON w1."user" = w2."user"
AND w1.state = 'working'
AND w2.state = 'stopped'
AND w2.ts > w1.ts
AND NOT EXISTS (
SELECT 1 FROM work_events w3
WHERE w3."user" = w1."user"
AND w3.ts > w1.ts AND w3.ts < w2.ts
)
GROUP BY
w1."user", month_start
ORDER BY
month_start DESC, w1."user";
-- 7. Detailed user activity log for specific user and date range
-- This query shows all work sessions with start, end times and duration
-- Use :username and :start_date, :end_date as parameters
SELECT
w1."user",
DATE(w1.ts) AS work_date,
w1.ts AS start_time,
w2.ts AS end_time,
EXTRACT(EPOCH FROM (w2.ts - w1.ts))/3600 AS session_duration_hours
FROM
work_events w1
JOIN
work_events w2 ON w1."user" = w2."user"
AND w1.state = 'working'
AND w2.state = 'stopped'
AND w2.ts > w1.ts
AND NOT EXISTS (
SELECT 1 FROM work_events w3
WHERE w3."user" = w1."user"
AND w3.ts > w1.ts AND w3.ts < w2.ts
)
WHERE
w1."user" = :username
AND DATE(w1.ts) BETWEEN :start_date AND :end_date
ORDER BY
w1.ts;
-- 8. Filter for specific time periods (Today, This Week, This Month)
-- Today's work duration by user
SELECT
w1."user",
CURRENT_DATE AS day,
SUM(
EXTRACT(EPOCH FROM (w2.ts - w1.ts))/3600
) AS duration_hours,
MIN(CASE WHEN w1.state = 'working' THEN w1.ts END) AS first_login_time
FROM
work_events w1
JOIN
work_events w2 ON w1."user" = w2."user"
AND w1.state = 'working'
AND w2.state = 'stopped'
AND w2.ts > w1.ts
AND NOT EXISTS (
SELECT 1 FROM work_events w3
WHERE w3."user" = w1."user"
AND w3.ts > w1.ts AND w3.ts < w2.ts
)
WHERE
DATE(w1.ts) = CURRENT_DATE
GROUP BY
w1."user"
ORDER BY
w1."user";
-- This Week's work duration by user
SELECT
w1."user",
date_trunc('week', CURRENT_DATE) AS week_start,
SUM(
EXTRACT(EPOCH FROM (w2.ts - w1.ts))/3600
) AS duration_hours,
MIN(CASE WHEN w1.state = 'working' THEN w1.ts END) AS first_login_time
FROM
work_events w1
JOIN
work_events w2 ON w1."user" = w2."user"
AND w1.state = 'working'
AND w2.state = 'stopped'
AND w2.ts > w1.ts
AND NOT EXISTS (
SELECT 1 FROM work_events w3
WHERE w3."user" = w1."user"
AND w3.ts > w1.ts AND w3.ts < w2.ts
)
WHERE
date_trunc('week', w1.ts) = date_trunc('week', CURRENT_DATE)
GROUP BY
w1."user"
ORDER BY
w1."user";
-- This Month's work duration by user
SELECT
w1."user",
date_trunc('month', CURRENT_DATE) AS month_start,
SUM(
EXTRACT(EPOCH FROM (w2.ts - w1.ts))/3600
) AS duration_hours,
MIN(CASE WHEN w1.state = 'working' THEN w1.ts END) AS first_login_time
FROM
work_events w1
JOIN
work_events w2 ON w1."user" = w2."user"
AND w1.state = 'working'
AND w2.state = 'stopped'
AND w2.ts > w1.ts
AND NOT EXISTS (
SELECT 1 FROM work_events w3
WHERE w3."user" = w1."user"
AND w3.ts > w1.ts AND w3.ts < w2.ts
)
WHERE
date_trunc('month', w1.ts) = date_trunc('month', CURRENT_DATE)
GROUP BY
w1."user"
ORDER BY
w1."user";

ecosystem.config.js Normal file

@ -0,0 +1,26 @@
module.exports = {
apps: [
{
name: "work-tracing",
script: "./start_app.sh",
args: "prod",
interpreter: "bash",
env: {
// Environment variables will be loaded from config.env by the application
NODE_ENV: "production",
// These env vars are for PM2's own management, not for the app
PM2_SERVE_PORT: 5050,
PM2_SERVE_HOST: "0.0.0.0"
},
// Additional PM2 settings for better management
watch: false,
max_memory_restart: "300M",
restart_delay: 3000,
max_restarts: 10,
// Optional logging configuration
error_file: "./logs/pm2-error.log",
out_file: "./logs/pm2-out.log",
merge_logs: true
},
],
};

project_config.md Normal file

@ -0,0 +1,85 @@
# Project Configuration (LTM)
This file contains the stable, long-term context for the project. It should be updated infrequently, primarily when core goals, tech, or patterns change.
## Core Goal
Implement an automated employee workstation activity tracking system that logs user login events and active/inactive periods on Windows machines, reports these events to a central Flask API, and provides a dashboard with daily, weekly, and monthly working time summaries.
## Tech Stack
* **Client**:
* OS: Windows 10/11
* Scripting: PowerShell
* Scheduler: Windows Task Scheduler
* **Server**:
* Language: Python 3.9+
* Framework: Flask
* ORM: SQLAlchemy
* Database: SQLite (development) / PostgreSQL (production)
* Server: Gunicorn / `flask run`
* **Frontend**:
* HTML, CSS, JavaScript
* Charting Library: Chart.js
* **Infrastructure**:
* Host: On-premises LAN server
* Networking: HTTP on corporate LAN (no external exposure)
* **DevOps & Tooling**:
* Version Control: Git
* Testing: pytest
* Database Migrations: Alembic (for PostgreSQL)
* Linting/Formatting: Flake8, Black, isort
## Critical Patterns & Conventions
* **API Design**:
* RESTful endpoint: `POST /api/report`
* Request payload: JSON with fields `user`, `state`, `ts` (ISO 8601)
* Response format: JSON `{ "success": bool, "message"?: "error details" }`, HTTP 201 on success
* **Database Model**:
* Table `work_events` with columns:
* `id` (PK, auto-increment)
* `user` (VARCHAR)
* `state` (VARCHAR, e.g., "working"/"stopped")
* `ts` (DATETIME, default now)
* **Idle Detection Logic**:
* Fixed inactivity threshold of 10 minutes
* State transitions: working → stopped on crossing threshold, stopped → working on activity
* **Error Handling**:
* **PowerShell**: Wrap API calls in `try/catch`, log failures, optional retry
* **Flask**: Global error handlers returning structured JSON errors
* **Configuration Management**:
* Environment variables or `.env` file for API server URL, port, and DB connection string
* **Coding Standards**:
* **Python**: PEP 8, docstrings for modules and functions
* **PowerShell**: Verb-Noun function names, consistent two-space indentation
* **JavaScript**: ES6+ syntax, modular code, avoid globals
* **Commit Messages**:
* Use Conventional Commits format (e.g., `feat: add idle detection logic`)
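The idle-detection transitions and the `POST /api/report` payload described above can be sketched as a minimal Python model (the production client is PowerShell, so the function names here are illustrative only):

```python
import json
from datetime import datetime, timezone

IDLE_THRESHOLD_MINUTES = 10  # fixed threshold per the constraint below

def next_state(idle_minutes):
    # working -> stopped once idle time crosses the threshold;
    # stopped -> working as soon as activity resumes
    return "stopped" if idle_minutes >= IDLE_THRESHOLD_MINUTES else "working"

def build_report(user, state, ts=None):
    # Payload shape for POST /api/report: user, state, ts (ISO 8601)
    ts = ts or datetime.now(timezone.utc).isoformat()
    return json.dumps({"user": user, "state": state, "ts": ts})

print(next_state(3))    # working
print(next_state(10))   # stopped
print(build_report("Ilia", "working", "2025-05-04T09:00:00+00:00"))
```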
## Key Constraints
* **Client Runtime**: Must rely solely on built-in PowerShell and Task Scheduler (no external installs)
* **Idle Threshold**: Exactly 10 minutes of inactivity
* **Authentication**: No API auth; system is LAN-restricted
* **Deployment**: Server must run on-premises within corporate LAN
* **Data Retention**: Raw event logs retained for 1 year; aggregated summaries retained indefinitely
* **Performance**: Support up to 100 client reports per minute without degradation
## Tokenization Settings
Estimation Method: Character-based
Characters Per Token (Estimate): 4

requirements.txt Normal file

@ -0,0 +1,18 @@
Flask==2.3.3
Flask-SQLAlchemy==3.1.1
SQLAlchemy==2.0.20
gunicorn==21.2.0
python-dotenv==1.0.0
psycopg2-binary==2.9.7
pytest==7.4.0
black==23.7.0
flake8==6.1.0
isort==5.12.0
alembic==1.12.0
Werkzeug==2.3.7
Jinja2==3.1.2
itsdangerous==2.1.2
click==8.1.7
requests
APScheduler==3.10.4
Flask-APScheduler==1.12.4

run.py Normal file

@ -0,0 +1,76 @@
"""
Entry point for the Employee Workstation Activity Tracking application.
This script initializes the Flask application and runs the development server.
"""
import os
# Removed unused import sys
from app import create_app, init_db
from sqlalchemy.exc import SQLAlchemyError # Import SQLAlchemyError
# Create the application instance using the factory function
app = create_app()
if __name__ == "__main__":
# Print some diagnostic information
# Consider using app.logger for some of these if appropriate after app init
app.logger.info("Starting application via run.py...") # Changed to app.logger
app.logger.info(f"Instance path: {app.instance_path}") # Changed to app.logger
app.logger.info(
f"Instance path exists: {os.path.exists(app.instance_path)}"
) # Changed to app.logger
# Make sure the instance directory exists
if not os.path.exists(app.instance_path):
try:
os.makedirs(app.instance_path)
app.logger.info(
f"Created instance directory: {app.instance_path}"
) # Changed to app.logger
except OSError as e: # Changed from Exception to OSError
app.logger.error(
f"Error creating instance directory: {e}"
) # Changed to app.logger
# Check instance directory permissions
has_write_access = os.access(app.instance_path, os.W_OK)
app.logger.info(
f"Instance path write access: {has_write_access}"
) # Changed to app.logger
# Print database configuration
db_uri = app.config["SQLALCHEMY_DATABASE_URI"]
masked_uri = db_uri
if "@" in db_uri:
parts = db_uri.split("@")
masked_uri = "****@" + parts[1]
app.logger.info(f"Database URI: {masked_uri}") # Changed to app.logger
# Initialize the database if needed
try:
app.logger.info("Initializing database from run.py...") # Changed to app.logger
init_db(app) # Pass the app instance
app.logger.info(
"Database initialization successful from run.py"
) # Changed to app.logger
except SQLAlchemyError as db_e: # Specific for DB issues
app.logger.error(f"Database error during initialization from run.py: {db_e}")
# Optionally log full traceback for SQLAlchemyErrors if they are complex
# import traceback
# app.logger.error(traceback.format_exc())
except Exception as e:  # For other unexpected errors during init
import traceback
app.logger.error(
f"Unexpected error during database initialization from run.py: {e}"
)
app.logger.error(traceback.format_exc())
# Run the Flask application
host = os.environ.get("HOST", "0.0.0.0")
port = int(os.environ.get("PORT", 5000))
debug = os.environ.get("DEBUG", "False").lower() == "true"
app.logger.info(
f"Starting Flask application on {host}:{port} (debug={debug})"
) # Changed to app.logger
app.run(host=host, port=port, debug=debug)

start_app.sh Executable file

@ -0,0 +1,61 @@
#!/bin/bash
# Enhanced Start Script for Employee Workstation Activity Tracking System
# Function to display error and exit
error_exit() {
echo "ERROR: $1" >&2
exit 1
}
# Check if run.py exists
if [ ! -f "run.py" ]; then
error_exit "run.py not found in the current directory. Please ensure you are in the project root."
fi
# Check for virtual environment and activate
if [ -d "venv" ]; then
echo "Activating virtual environment..."
source venv/bin/activate
else
echo "WARNING: Virtual environment 'venv' not found. Attempting to run without it."
echo "It is highly recommended to create and use a virtual environment."
echo "You can create one with: python -m venv venv"
fi
# Determine mode (development or production)
MODE=${1:-dev} # Default to 'dev' if no argument is provided
if [[ "$MODE" == "dev" || "$MODE" == "development" ]]; then
echo "Starting application in DEVELOPMENT mode..."
echo "--- Python Diagnostics ---"
echo "Which python: $(which python)"
echo "Which python3: $(which python3)"
echo "PYTHONPATH: $PYTHONPATH"
ACTIVE_PYTHON_VERSION=$(python --version 2>&1 || python3 --version 2>&1)
echo "Active Python version: $ACTIVE_PYTHON_VERSION"
echo "Attempting to import dotenv directly via command line..."
python -c "import dotenv; print('dotenv imported successfully. Path:', dotenv.__file__)" || python3 -c "import dotenv; print('dotenv imported successfully. Path:', dotenv.__file__)"
echo "--- End Python Diagnostics ---"
if command -v python &> /dev/null; then
python run.py
elif command -v python3 &> /dev/null; then
python3 run.py
else
error_exit "Python interpreter (python or python3) not found. Please ensure Python is installed and in your PATH."
fi
elif [[ "$MODE" == "prod" || "$MODE" == "production" ]]; then
echo "Starting application in PRODUCTION mode using Gunicorn on port 5050..."
if command -v gunicorn &> /dev/null; then
# Check if Gunicorn is installed in the venv or globally
gunicorn -w 4 -b 0.0.0.0:5050 "app:create_app()"
else
error_exit "Gunicorn not found. Please install it: pip install gunicorn"
fi
else
echo "Invalid mode specified: $MODE"
echo "Usage: $0 [dev|development|prod|production]"
error_exit "Please specify a valid mode."
fi

static/css/dashboard.css Normal file

@ -0,0 +1,101 @@
body {
font-family: 'Segoe UI', Tahoma, Geneva, Verdana, sans-serif;
background-color: #f8f9fa;
padding: 20px;
}
.card {
box-shadow: 0 4px 6px rgba(0, 0, 0, 0.1);
margin-bottom: 20px;
}
.card-header {
background-color: #e9ecef; /* Slightly darker header */
font-weight: 600;
}
.btn-filter {
margin-right: 5px;
}
.table th {
background-color: #e9ecef; /* Match card header */
font-weight: 600;
}
#reportTable th:nth-child(1), /* User column */
#reportTable td:nth-child(1) {
width: 20%;
}
#reportTable th:nth-child(2), /* Period column */
#reportTable td:nth-child(2) {
width: 25%;
}
#reportTable th:nth-child(3), /* Duration column */
#reportTable td:nth-child(3) {
width: 15%;
text-align: center;
}
#reportTable th:nth-child(4), /* State column */
#reportTable td:nth-child(4) {
width: 15%;
text-align: center;
}
.spinner-border {
width: 1.5rem;
height: 1.5rem;
}
.user-link {
color: #0d6efd;
text-decoration: none;
cursor: pointer;
}
.user-link:hover {
text-decoration: underline;
}
.modal-body table th:nth-child(1) {
width: 25%;
}
.modal-body table th:nth-child(2),
.modal-body table th:nth-child(3) {
width: 30%;
}
.modal-body table th:nth-child(4),
.modal-body table td:nth-child(4) {
width: 15%;
text-align: center;
}
/* Sortable columns */
.sortable {
cursor: pointer;
position: relative;
}
.sortable:hover {
background-color: #dee2e6;
}
.sort-icon {
font-size: 0.8rem;
margin-left: 5px;
opacity: 0.5;
}
.sortable[data-dir="asc"] .sort-icon {
opacity: 1;
}
.sortable[data-dir="desc"] .sort-icon {
opacity: 1;
transform: rotate(180deg);
}
/* Styles for specific element widths previously inline */
#weekDaySelect,
#dateSelector {
width: auto; /* Moved from inline style */
}
#userFilterInputGroup {
width: 220px; /* Moved from inline style */
}
.toast-container-custom {
z-index: 1100; /* Bootstrap's default for fixed elements like navbars is 1030, modals are 1050+. Toasts should be high. */
}

static/favicon.ico Normal file

Binary file not shown. (1.2 MiB)

static/js/api.js Normal file

@ -0,0 +1,191 @@
/**
* api.js - API communication module
*
* This module handles all communication with the server API endpoints.
*/
import AppState from './state.js';
import { showLogMessage } from './uiHelpers.js';
import * as DateUtils from './dateUtils.js'; // Import all of DateUtils
/**
* Load report data from the server (now fetches real_work_hours)
* @param {boolean} isAutoRefresh - Whether this is an auto-refresh call
* @returns {Promise<Array>} - Promise resolving to report data
*/
export async function loadReportData(isAutoRefresh = false) {
const currentPeriod = AppState.getCurrentPeriod();
const user = AppState.getUserFilterText();
let apiUrl = '/api/reports/real_work_hours'; // New endpoint
const params = new URLSearchParams();
if (user) {
params.append('username', user);
}
// Add date parameters based on the current period
let startDate, endDate;
if (currentPeriod === 'daily') {
startDate = AppState.getSelectedDate();
endDate = startDate; // For daily, start and end date are the same
params.append('start_date', startDate);
params.append('end_date', endDate);
} else if (currentPeriod === 'weekly') {
if (AppState.getSelectedWeekDay()) {
startDate = AppState.getSelectedWeekDay();
endDate = startDate; // If a specific day in week is chosen, treat as daily
} else {
// Get current week's start (Monday) and end (Sunday)
// Assuming AppState.getSelectedDate() gives a date within the desired week if not today
const refDate = AppState.getSelectedDate() ? new Date(AppState.getSelectedDate()) : new Date();
startDate = DateUtils.getFirstDayOfWeek(refDate);
endDate = DateUtils.getLastDayOfWeek(refDate);
}
params.append('start_date', startDate);
params.append('end_date', endDate);
} else if (currentPeriod === 'monthly') {
// Get current month's start and end dates
// Assuming AppState.getSelectedDate() is not used for month context, so always current month
startDate = DateUtils.getFirstDayOfCurrentMonth();
endDate = DateUtils.getLastDayOfCurrentMonth();
params.append('start_date', startDate);
params.append('end_date', endDate);
}
// If no specific period logic matched or for a general overview (if API supports it without dates)
// we might not append dates, or the API endpoint could have defaults.
// For this implementation, daily, weekly, monthly will always have date params.
if (params.toString()) {
apiUrl += `?${params.toString()}`;
}
try {
const response = await fetch(apiUrl);
if (!response.ok) {
throw new Error(`HTTP error! status: ${response.status}`);
}
const result = await response.json();
if (result.success && result.data) {
// Store the data in app state
AppState.setReportData(result.data);
if (isAutoRefresh) {
showLogMessage('Report data refreshed automatically');
}
return result.data;
} else {
throw new Error(result.message || 'Failed to load data.');
}
} catch (error) {
console.error('Error fetching report data:', error);
throw error;
}
}
/**
* Fetch current user states from the server
* @param {boolean} forceRefresh - Whether to force a refresh even if the cache is valid
* @returns {Promise<Object>} - Promise resolving to user states object
*/
export async function fetchUserStates(forceRefresh = false) {
// Only update user states if cache expired or force refresh
const now = Date.now();
if (!forceRefresh &&
now - AppState.getUserStatesLastUpdate() < AppState.getStateCacheDuration() &&
Object.keys(AppState.getUserStates()).length > 0) {
return AppState.getUserStates();
}
try {
const response = await fetch('/api/user-states');
if (!response.ok) {
throw new Error(`HTTP error! status: ${response.status}`);
}
const result = await response.json();
if (result.success && result.data) {
const prevStates = {...AppState.getUserStates()};
AppState.setUserStates(result.data);
// If forcing a refresh, log the change statistics
if (forceRefresh) {
const totalUsers = Object.keys(result.data).length;
const workingUsers = Object.values(result.data).filter(state => state === 'working').length;
const changedStates = countChangedStates(prevStates, result.data);
// Only show message if it's a manual refresh or if there were changes
// Also check if AppState.isAutoRefreshEnabled is a function before calling
const autoRefreshEnabled = (AppState && typeof AppState.isAutoRefreshEnabled === 'function') ? AppState.isAutoRefreshEnabled() : true;
if (!autoRefreshEnabled || changedStates > 0) {
showLogMessage(`States refreshed: ${totalUsers} users (${workingUsers} working). ${changedStates} state changes detected.`);
}
}
return result.data;
} else {
// If result.success is false, throw an error with the message from the API
throw new Error(result.message || 'Failed to fetch user states: API indicated failure.');
}
} catch (error) {
console.error('Error fetching user states:', error);
// Optionally, clear cached states on error or handle more gracefully
// For now, re-throw to let the caller handle it or display a general error.
throw error;
}
}
/**
* Load user activity data for a specific user
* @param {string} username - Username to fetch activity for
* @param {string} startDate - Start date in YYYY-MM-DD format
* @param {string} endDate - End date in YYYY-MM-DD format
* @returns {Promise<Array>} - Promise resolving to user activity data
*/
export async function loadUserActivityData(username, startDate, endDate) {
try {
const response = await fetch(`/api/user-activity/${encodeURIComponent(username)}?start_date=${startDate}&end_date=${endDate}`);
if (!response.ok) {
throw new Error(`HTTP error! status: ${response.status}`);
}
const result = await response.json();
if (result.success && result.data) {
return result.data.activities;
} else {
throw new Error(result.message || 'Failed to load activity data');
}
} catch (error) {
console.error('Error fetching user activity data:', error);
throw error;
}
}
/**
* Helper function to count changed states between refreshes
* @param {Object} prevStates - Previous state object
* @param {Object} newStates - New state object
* @returns {number} - Number of state changes
*/
function countChangedStates(prevStates, newStates) {
let count = 0;
// Check for changes in existing users
Object.keys(newStates).forEach(user => {
if (prevStates[user] && prevStates[user] !== newStates[user]) {
count++;
console.log(`State change detected: ${user} changed from ${prevStates[user]} to ${newStates[user]}`);
}
});
// Check for new users
const newUsers = Object.keys(newStates).filter(user => !prevStates[user]);
if (newUsers.length > 0) {
count += newUsers.length;
console.log(`New users detected: ${newUsers.join(', ')}`);
}
return count;
}
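
`countChangedStates` above tallies both flipped states and newly seen users. A condensed, standalone sketch of the same diff logic (an illustration, not the module's exported code):

```javascript
// Count users whose state changed plus users that appeared since the last
// refresh. States are plain objects mapping username -> state string.
function diffStates(prevStates, newStates) {
  let changed = 0;
  for (const user of Object.keys(newStates)) {
    if (!prevStates[user]) {
      changed++; // user seen for the first time
    } else if (prevStates[user] !== newStates[user]) {
      changed++; // existing user whose state flipped
    }
  }
  return changed;
}
```

Users removed between refreshes are not counted, matching the helper above.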

209
static/js/autoRefresh.js Normal file

@ -0,0 +1,209 @@
/**
* autoRefresh.js - Auto-refresh functionality
*
* This module handles the automatic refreshing of dashboard data.
*/
import AppState from './state.js';
import { loadReportData, fetchUserStates } from './api.js';
import { showLogMessage } from './uiHelpers.js';
/**
* Toggle auto-refresh on/off
*/
export function toggleAutoRefresh() {
const autoRefreshBtn = document.getElementById('autoRefreshBtn');
const autoRefreshIndicator = document.getElementById('autoRefreshIndicator');
if (!AppState || typeof AppState.isAutoRefreshEnabled !== 'function' || typeof AppState.setAutoRefreshEnabled !== 'function' || typeof AppState.getAutoRefreshInterval !== 'function') {
console.error("AppState or its methods are not available in toggleAutoRefresh.");
return;
}
if (!autoRefreshBtn || !autoRefreshIndicator) {
console.error("UI elements for auto-refresh not found in toggleAutoRefresh.");
return;
}
const currentState = AppState.isAutoRefreshEnabled();
AppState.setAutoRefreshEnabled(!currentState);
if (AppState.isAutoRefreshEnabled()) {
// Enable auto-refresh
startAutoRefresh();
autoRefreshBtn.innerHTML = '<i class="bi bi-pause-circle"></i> Pause Auto-refresh';
autoRefreshIndicator.classList.remove('d-none');
showLogMessage('Auto-refresh enabled. Dashboard will update every ' +
(AppState.getAutoRefreshInterval()/1000) + ' seconds.');
} else {
// Disable auto-refresh
stopAutoRefresh();
autoRefreshBtn.innerHTML = '<i class="bi bi-play-circle"></i> Resume Auto-refresh';
autoRefreshIndicator.classList.add('d-none');
showLogMessage('Auto-refresh paused. Manual refresh required.');
}
}
/**
* Start auto-refresh timer
*/
export function startAutoRefresh() {
if (!AppState || typeof AppState.getAutoRefreshTimer !== 'function' || typeof AppState.setAutoRefreshTimer !== 'function' || typeof AppState.getAutoRefreshInterval !== 'function' || typeof AppState.setLastRefreshTime !== 'function') {
console.error("AppState or its methods are not available in startAutoRefresh.");
return;
}
if (AppState.getAutoRefreshTimer()) {
clearInterval(AppState.getAutoRefreshTimer());
}
// Start a new timer
const timer = setInterval(performAutoRefresh, AppState.getAutoRefreshInterval());
AppState.setAutoRefreshTimer(timer);
AppState.setLastRefreshTime(Date.now());
updateRefreshIndicator();
// Start updating the indicator
startRefreshCountdown();
}
/**
* Stop auto-refresh timer
*/
export function stopAutoRefresh() {
if (!AppState || typeof AppState.getAutoRefreshTimer !== 'function' || typeof AppState.setAutoRefreshTimer !== 'function') {
console.error("AppState or its methods are not available in stopAutoRefresh.");
return;
}
if (AppState.getAutoRefreshTimer()) {
clearInterval(AppState.getAutoRefreshTimer());
AppState.setAutoRefreshTimer(null);
}
// Stop updating the indicator
stopRefreshCountdown();
}
/**
* Perform the actual refresh
*/
export async function performAutoRefresh() {
console.log('Auto-refreshing dashboard...');
if (AppState && typeof AppState.setLastRefreshTime === 'function') {
AppState.setLastRefreshTime(Date.now());
} else {
console.error("AppState.setLastRefreshTime is not available in performAutoRefresh.");
}
try {
// First update user states
await fetchUserStates(true);
// Then reload report data
await loadReportData(true); // Pass true to indicate it's an auto-refresh
showLogMessage('Dashboard auto-refreshed successfully');
} catch (error) {
console.error('Auto-refresh error:', error);
showLogMessage('Auto-refresh error: ' + error.message);
}
}
/**
* Update refresh indicator display
*/
export function updateRefreshIndicator() {
const autoRefreshIndicator = document.getElementById('autoRefreshIndicator');
if (!AppState || typeof AppState.isAutoRefreshEnabled !== 'function' || typeof AppState.getLastRefreshTime !== 'function' || typeof AppState.getAutoRefreshInterval !== 'function') {
console.error("AppState or its methods are not available in updateRefreshIndicator.");
return;
}
if (!autoRefreshIndicator) {
// If indicator is not present, just return silently. It might not be on all pages/views.
return;
}
if (!AppState.isAutoRefreshEnabled()) return;
const elapsedSeconds = Math.floor((Date.now() - AppState.getLastRefreshTime()) / 1000);
const remainingSeconds = Math.max(0, Math.floor(AppState.getAutoRefreshInterval() / 1000) - elapsedSeconds);
autoRefreshIndicator.textContent = 'Auto-refresh in ' + remainingSeconds + 's';
}
/**
* Start refresh countdown timer
*/
export function startRefreshCountdown() {
if (!AppState || typeof AppState.getCountdownTimer !== 'function' || typeof AppState.setCountdownTimer !== 'function') {
console.error("AppState or its methods are not available in startRefreshCountdown.");
return;
}
if (AppState.getCountdownTimer()) {
clearInterval(AppState.getCountdownTimer());
}
// Update immediately, then start the interval
updateRefreshIndicator();
const timer = setInterval(updateRefreshIndicator, 1000);
AppState.setCountdownTimer(timer);
}
/**
* Stop refresh countdown timer
*/
export function stopRefreshCountdown() {
if (!AppState || typeof AppState.getCountdownTimer !== 'function' || typeof AppState.setCountdownTimer !== 'function') {
console.error("AppState or its methods are not available in stopRefreshCountdown.");
return;
}
if (AppState.getCountdownTimer()) {
clearInterval(AppState.getCountdownTimer());
AppState.setCountdownTimer(null);
}
}
/**
* Initialize auto-refresh functionality
*/
export function initializeAutoRefresh() {
const autoRefreshBtn = document.getElementById('autoRefreshBtn');
const autoRefreshIndicator = document.getElementById('autoRefreshIndicator');
if (!AppState || typeof AppState.isAutoRefreshEnabled !== 'function' || typeof AppState.getAutoRefreshInterval !== 'function') {
console.error("AppState or its methods are not available for initializing auto-refresh.");
// Attempt to gracefully handle if AppState is not ready, maybe auto-refresh remains off.
if (autoRefreshBtn) autoRefreshBtn.innerHTML = '<i class="bi bi-play-circle"></i> Resume Auto-refresh';
if (autoRefreshIndicator) autoRefreshIndicator.classList.add('d-none');
return;
}
// Set up event listener for the auto-refresh button
if (autoRefreshBtn) {
autoRefreshBtn.addEventListener('click', toggleAutoRefresh);
// Synchronize button text/icon with initial state
if (AppState.isAutoRefreshEnabled()) {
autoRefreshBtn.innerHTML = '<i class="bi bi-pause-circle"></i> Pause Auto-refresh';
} else {
autoRefreshBtn.innerHTML = '<i class="bi bi-play-circle"></i> Resume Auto-refresh';
}
}
// Initialize auto-refresh indicator
if (autoRefreshIndicator) {
if (AppState.isAutoRefreshEnabled()) {
autoRefreshIndicator.classList.remove('d-none');
} else {
autoRefreshIndicator.classList.add('d-none');
}
}
// Start auto-refresh if enabled
if (AppState.isAutoRefreshEnabled()) {
startAutoRefresh();
}
}
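
The countdown shown by `updateRefreshIndicator` is simple elapsed-time arithmetic. Pulled out as a pure function (the name and signature here are illustrative, not part of the module), it can be exercised without the DOM or timers:

```javascript
// Seconds left until the next auto-refresh, clamped at zero once the
// interval has elapsed. Mirrors the math in updateRefreshIndicator.
function remainingSeconds(lastRefreshMs, intervalMs, nowMs) {
  const elapsed = Math.floor((nowMs - lastRefreshMs) / 1000);
  return Math.max(0, Math.floor(intervalMs / 1000) - elapsed);
}
```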

303
static/js/dashboard.js Normal file

@ -0,0 +1,303 @@
/**
* dashboard.js - Main dashboard controller
*
* This module serves as the main entry point and orchestrator for the dashboard,
* importing and initializing all the specialized modules.
*/
// Import all the modules
import AppState from './state.js';
import * as DateUtils from './dateUtils.js';
import * as Filters from './filters.js';
import * as TableManager from './tableManager.js';
import * as UserStates from './userStates.js';
import * as AutoRefresh from './autoRefresh.js';
import * as UserActivity from './userActivity.js';
import * as Api from './api.js';
// Main initialization function
document.addEventListener('DOMContentLoaded', function() {
// DOM element references
const filterButtons = document.querySelectorAll('.btn-filter');
const userFilterInput = document.getElementById('userFilter');
const clearUserFilterBtn = document.getElementById('clearUserFilter');
const loadingSpinner = document.getElementById('loadingSpinner');
const errorMessage = document.getElementById('errorMessage');
const weekDaySelector = document.getElementById('weekDaySelector');
const weekDaySelect = document.getElementById('weekDaySelect');
const refreshStatesBtn = document.getElementById('refreshStates');
const dateNavControls = document.getElementById('dateNavControls');
const prevDateBtn = document.getElementById('prevDateBtn');
const nextDateBtn = document.getElementById('nextDateBtn');
const calendarBtn = document.getElementById('calendarBtn');
const dateSelector = document.getElementById('dateSelector');
// Set initial value for date selector if AppState and getSelectedDate are defined
if (AppState && typeof AppState.getSelectedDate === 'function') {
const selectedDate = AppState.getSelectedDate();
if (selectedDate) {
dateSelector.value = selectedDate;
}
}
// Initialize all components
initializeNavigation();
initializeFilters();
initializeWeekDaySelector();
// Initialize sorting, state filters, and auto-refresh
Filters.initializeSortingHeaders();
Filters.initializeStateFilters();
AutoRefresh.initializeAutoRefresh();
// Set initial sort indicator based on AppState default sort settings
setInitialSortIndicator();
// Initialize user activity modal
UserActivity.initializeUserActivityModal();
// First load user states, then load report data
UserStates.refreshUserStates(true).then(() => {
// Load initial report data
loadReportData();
}).catch(error => {
console.error('Error loading initial user states:', error);
// Still load report data even if states fail.
loadReportData();
});
/**
* Set the initial sort indicator on the table header based on AppState default values
*/
function setInitialSortIndicator() {
const sortColumn = AppState.getCurrentSortColumn();
const sortDirection = AppState.getCurrentSortDirection();
if (sortColumn && sortDirection) {
const header = document.querySelector(`.sortable[data-sort="${sortColumn}"]`);
if (header) {
header.setAttribute('data-dir', sortDirection);
}
}
}
/**
* Updates the visibility of UI controls based on the selected reporting period.
* @param {string} period - The current reporting period ('daily', 'weekly', 'monthly').
*/
function updatePeriodControlsVisibility(period) {
if (period === 'weekly') {
weekDaySelector.classList.remove('d-none');
populateWeekDaySelector(); // Populate selector when shown
dateNavControls.classList.add('d-none');
} else if (period === 'daily') {
weekDaySelector.classList.add('d-none');
dateNavControls.classList.remove('d-none');
updateDateDisplay(); // Update date display for daily view
} else { // monthly or any other period
weekDaySelector.classList.add('d-none');
dateNavControls.classList.add('d-none');
}
}
/**
* Initialize date navigation controls.
* Sets up event listeners and initial visibility.
*/
function initializeNavigation() {
// Update the date display
updateDateDisplay();
// Show date navigation controls if daily view is active
if (AppState.getCurrentPeriod() === 'daily') {
dateNavControls.classList.remove('d-none');
}
// Add event listeners for date navigation
prevDateBtn.addEventListener('click', goToPreviousDay);
nextDateBtn.addEventListener('click', goToNextDay);
calendarBtn.addEventListener('click', toggleCalendar);
dateSelector.addEventListener('change', handleDateSelection);
}
/**
* Initialize filter buttons (Daily, Weekly, Monthly) and the user text filter.
* Sets up event listeners for period changes and user input.
*/
function initializeFilters() {
// Add event listeners to filter buttons
filterButtons.forEach(button => {
button.addEventListener('click', function() {
filterButtons.forEach(btn => btn.classList.remove('active'));
this.classList.add('active');
const period = this.getAttribute('data-period');
AppState.setCurrentPeriod(period);
// Update visibility of period-specific controls
updatePeriodControlsVisibility(period);
loadReportData(); // Reload data when period changes
});
});
// User filter input (with debounce)
let userFilterTimeout = null;
userFilterInput.addEventListener('input', function() {
// Clear any existing timeout
if (userFilterTimeout) {
clearTimeout(userFilterTimeout);
}
// Set a new timeout to delay the filter application (300ms debounce)
userFilterTimeout = setTimeout(function() {
AppState.setUserFilterText(userFilterInput.value.trim());
loadReportData();
}, 300);
});
// Clear user filter button
clearUserFilterBtn.addEventListener('click', function() {
userFilterInput.value = '';
AppState.setUserFilterText('');
loadReportData();
});
// Refresh states button
refreshStatesBtn.addEventListener('click', function() {
UserStates.refreshUserStates(true); // Force refresh
});
}
/**
* Initialize the week day selector for the weekly view.
* Sets up event listener for day selection changes.
*/
function initializeWeekDaySelector() {
// Add event listener for week day selector
weekDaySelect.addEventListener('change', function() {
AppState.setSelectedWeekDay(this.value);
loadReportData(); // Reload data when day selection changes
});
}
/**
* Populate the week day selector dropdown with days of the current week.
* Fetches days from DateUtils.
*/
function populateWeekDaySelector() {
weekDaySelect.innerHTML = '<option value="">All Week</option>';
const days = DateUtils.getDaysOfWeek();
days.forEach(day => {
const option = document.createElement('option');
option.value = day.date;
option.textContent = `${day.dayName} (${day.displayDate})`;
weekDaySelect.appendChild(option);
});
}
/**
* Update the current date display in the UI (e.g., "May 10, 2024").
* Uses the selected date from AppState.
*/
function updateDateDisplay() {
const currentDateDisplay = document.getElementById('currentDateDisplay');
currentDateDisplay.textContent = DateUtils.formatDateForDisplay(AppState.getSelectedDate());
}
/**
* Navigate to the previous day and reload report data.
* Updates AppState and the UI display.
*/
function goToPreviousDay() {
const newDate = DateUtils.getPreviousDay(AppState.getCurrentDate());
AppState.setCurrentDate(newDate);
updateDateDisplay();
loadReportData();
}
/**
* Navigate to the next day and reload report data.
* Updates AppState and the UI display.
*/
function goToNextDay() {
const newDate = DateUtils.getNextDay(AppState.getCurrentDate());
AppState.setCurrentDate(newDate);
updateDateDisplay();
loadReportData();
}
/**
* Handle date selection from the calendar input.
* Updates AppState, UI display, and reloads report data.
*/
function handleDateSelection() {
if (dateSelector.value) {
AppState.setSelectedDate(dateSelector.value);
updateDateDisplay();
dateSelector.style.display = 'none';
loadReportData();
}
}
/**
* Toggle the visibility of the calendar date input.
*/
function toggleCalendar() {
if (dateSelector.style.display === 'none') {
dateSelector.style.display = 'inline-block';
dateSelector.value = AppState.getSelectedDate();
dateSelector.focus();
} else {
dateSelector.style.display = 'none';
}
}
/**
* Load report data from the API and update the table.
* Manages loading spinner and error messages.
* @param {boolean} [isAutoRefresh=false] - Indicates if the load is triggered by auto-refresh.
* If true, loading spinner is not shown.
* @async
*/
async function loadReportData(isAutoRefresh = false) {
if (isAutoRefresh) {
// For auto-refresh, don't show spinner to avoid UI flicker
// Just update the last refresh time
if (AppState && typeof AppState.setLastRefreshTime === 'function') {
AppState.setLastRefreshTime(Date.now());
}
} else {
loadingSpinner.classList.remove('d-none'); // Show spinner for manual refresh
}
errorMessage.classList.add('d-none'); // Hide error message
try {
// Use the API module to load the report data
const data = await Api.loadReportData(isAutoRefresh);
// Apply default sorting if needed
const sortedData = AppState.getCurrentSortColumn() && AppState.getCurrentSortDirection()
? Filters.sortReportTable(data)
: data;
// Use the TableManager module to populate the table
TableManager.populateTable(sortedData, isAutoRefresh);
// If there's an active state filter, reapply it
if (AppState && typeof AppState.getCurrentStateFilter === 'function' && AppState.getCurrentStateFilter() !== 'all') {
Filters.filterTableByState(AppState.getCurrentStateFilter());
}
} catch (error) {
console.error('Error fetching report data:', error);
errorMessage.textContent = `Error: ${error.message || 'Network error or failed to fetch data.'}`;
errorMessage.classList.remove('d-none');
// Use TableManager to display the error in the table area
TableManager.showReportTableMessage('Failed to load data. See console for details.', true);
} finally {
loadingSpinner.classList.add('d-none'); // Hide spinner
}
}
});
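
The user filter above debounces input by holding a timeout for 300 ms and resetting it on every keystroke. The same pattern as a reusable sketch; the scheduler parameters are an assumption added purely to make the sketch testable without real timers, the dashboard itself calls setTimeout/clearTimeout directly:

```javascript
// Return a wrapped fn that only fires after delayMs of quiet time.
// schedule/cancel default to the real timer functions.
function debounce(fn, delayMs, schedule = setTimeout, cancel = clearTimeout) {
  let pending = null;
  return (...args) => {
    if (pending !== null) cancel(pending); // restart the quiet-time window
    pending = schedule(() => {
      pending = null;
      fn(...args);
    }, delayMs);
  };
}
```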

150
static/js/dateUtils.js Normal file

@ -0,0 +1,150 @@
/**
* dateUtils.js - Date manipulation utility functions
*
* This module contains utility functions for formatting and manipulating dates.
*/
/**
* Format date as YYYY-MM-DD for API requests
* @param {Date} date - Date object to format
* @returns {string} Formatted date string
*/
export function formatDateForAPI(date) {
const year = date.getFullYear();
const month = String(date.getMonth() + 1).padStart(2, '0');
const day = String(date.getDate()).padStart(2, '0');
return `${year}-${month}-${day}`;
}
/**
* Format date for display (DD/MM/YYYY)
* @param {string} dateStr - Date string in YYYY-MM-DD format
* @returns {string} Formatted date string for display
*/
export function formatDateForDisplay(dateStr) {
if (!dateStr || !/^\d{4}-\d{2}-\d{2}$/.test(dateStr)) {
// Return original string or an error/placeholder if format is unexpected
console.warn(`Invalid dateStr format for formatDateForDisplay: ${dateStr}`);
return dateStr;
}
// Parse YYYY-MM-DD as local date to avoid timezone interpretation issues
const parts = dateStr.split('-');
const year = parseInt(parts[0], 10);
const month = parseInt(parts[1], 10) - 1; // JavaScript months are 0-indexed
const day = parseInt(parts[2], 10);
const date = new Date(year, month, day);
return date.toLocaleDateString('en-GB', {
day: '2-digit',
month: '2-digit',
year: 'numeric'
});
}
/**
* Get days of the current week (Monday to Sunday)
* @returns {Array} Array of day objects with date, dayName, and displayDate
*/
export function getDaysOfWeek() {
const today = new Date();
const currentDay = today.getDay(); // 0 is Sunday, 1 is Monday, ...
const mondayOffset = currentDay === 0 ? -6 : 1 - currentDay; // Calculate days from today to Monday
const days = [];
const monday = new Date(today);
monday.setDate(today.getDate() + mondayOffset);
// Generate array with dates for Monday through Sunday
for (let i = 0; i < 7; i++) {
const date = new Date(monday);
date.setDate(monday.getDate() + i);
// Format date using local time
const year = date.getFullYear();
const month = String(date.getMonth() + 1).padStart(2, '0');
const day = String(date.getDate()).padStart(2, '0');
days.push({
date: `${year}-${month}-${day}`, // YYYY-MM-DD format
dayName: date.toLocaleDateString('en-US', { weekday: 'long' }), // Monday, Tuesday, etc.
displayDate: date.toLocaleDateString('en-GB') // DD/MM/YYYY format for display
});
}
return days;
}
/**
* Navigate to previous day
* @param {Date} currentDate - Current date object
* @returns {Date} New date object set to previous day
*/
export function getPreviousDay(currentDate) {
const newDate = new Date(currentDate);
newDate.setDate(newDate.getDate() - 1);
return newDate;
}
/**
* Navigate to next day
* @param {Date} currentDate - Current date object
* @returns {Date} New date object set to next day
*/
export function getNextDay(currentDate) {
const newDate = new Date(currentDate);
newDate.setDate(newDate.getDate() + 1);
return newDate;
}
/**
* Initialize date inputs with current date (today in client's local timezone)
* @returns {string} Today's date in YYYY-MM-DD format
*/
export function getTodayFormatted() {
return formatDateForAPI(new Date());
}
/**
* Get the first day of the current month.
* @returns {string} First day of the current month in YYYY-MM-DD format.
*/
export function getFirstDayOfCurrentMonth() {
const today = new Date();
const firstDay = new Date(today.getFullYear(), today.getMonth(), 1);
return formatDateForAPI(firstDay);
}
/**
* Get the last day of the current month.
* @returns {string} Last day of the current month in YYYY-MM-DD format.
*/
export function getLastDayOfCurrentMonth() {
const today = new Date();
// Day 0 of the next month resolves to the last day of the current month
const lastDay = new Date(today.getFullYear(), today.getMonth() + 1, 0);
return formatDateForAPI(lastDay);
}
/**
* Get the first day (Monday) of the week for a given date.
* @param {Date} [date=new Date()] - The date to get the week for. Defaults to today.
* @returns {string} First day of the week (Monday) in YYYY-MM-DD format.
*/
export function getFirstDayOfWeek(date = new Date()) {
const d = new Date(date);
const day = d.getDay(); // 0 is Sunday, 1 is Monday, ...
const diff = d.getDate() - day + (day === 0 ? -6 : 1); // Adjust when day is Sunday
const monday = new Date(d.setDate(diff));
return formatDateForAPI(monday);
}
/**
* Get the last day (Sunday) of the week for a given date.
* @param {Date} [date=new Date()] - The date to get the week for. Defaults to today.
* @returns {string} Last day of the week (Sunday) in YYYY-MM-DD format.
*/
export function getLastDayOfWeek(date = new Date()) {
const monday = new Date(getFirstDayOfWeek(date)); // Get Monday of that week
const sunday = new Date(monday);
sunday.setDate(monday.getDate() + 6);
return formatDateForAPI(sunday);
}
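
The Monday arithmetic in `getFirstDayOfWeek` hinges on `getDay()` returning 0 for Sunday, which must roll back six days rather than forward one. A standalone copy (illustrative, not the exported function) shows the expected results:

```javascript
// Return the Monday of the week containing `date`, as YYYY-MM-DD local time.
function mondayOf(date) {
  const d = new Date(date);
  const day = d.getDay(); // 0 = Sunday, 1 = Monday, ...
  const diff = d.getDate() - day + (day === 0 ? -6 : 1); // Sunday rolls back
  d.setDate(diff);
  const pad = n => String(n).padStart(2, '0');
  return `${d.getFullYear()}-${pad(d.getMonth() + 1)}-${pad(d.getDate())}`;
}
```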

232
static/js/filters.js Normal file

@ -0,0 +1,232 @@
/**
* filters.js - Filtering and sorting functionality
*
* This module handles all filtering and sorting operations for the dashboard.
*/
import AppState from './state.js';
import { populateTableRows } from './tableManager.js';
import { getUserStateValue, getUserActualState } from './userStates.js';
import { setActiveButton } from './uiHelpers.js';
/**
* Initialize state filters
*/
export function initializeStateFilters() {
const stateFilterAll = document.getElementById('stateFilterAll');
const stateFilterWorking = document.getElementById('stateFilterWorking');
const stateFilterNotWorking = document.getElementById('stateFilterNotWorking');
if (!stateFilterAll || !stateFilterWorking || !stateFilterNotWorking) {
console.error("State filter buttons not found in initializeStateFilters.");
return;
}
// State filter event listeners
stateFilterAll.addEventListener('click', function() {
setActiveStateFilter('all');
filterTableByState('all');
});
stateFilterWorking.addEventListener('click', function() {
setActiveStateFilter('working');
filterTableByState('working');
});
stateFilterNotWorking.addEventListener('click', function() {
setActiveStateFilter('not_working');
filterTableByState('not_working');
});
}
/**
* Set the active state filter button
* @param {string} filterType - Filter type ('all', 'working', 'not_working')
*/
export function setActiveStateFilter(filterType) {
const stateFilterAll = document.getElementById('stateFilterAll');
const stateFilterWorking = document.getElementById('stateFilterWorking');
const stateFilterNotWorking = document.getElementById('stateFilterNotWorking');
if (!stateFilterAll || !stateFilterWorking || !stateFilterNotWorking) {
console.error("State filter buttons not found in setActiveStateFilter.");
return;
}
if (!AppState || typeof AppState.setCurrentStateFilter !== 'function') {
console.error("AppState.setCurrentStateFilter is not available in setActiveStateFilter.");
return;
}
const buttons = [stateFilterAll, stateFilterWorking, stateFilterNotWorking];
let activeButtonElement = stateFilterAll;
switch (filterType) {
case 'working':
activeButtonElement = stateFilterWorking;
break;
case 'not_working':
activeButtonElement = stateFilterNotWorking;
break;
}
setActiveButton(buttons, activeButtonElement); // Use the helper
AppState.setCurrentStateFilter(filterType);
}
/**
* Filter the table by user state
* @param {string} stateFilter - State to filter by ('all', 'working', 'not_working')
*/
export function filterTableByState(stateFilter) {
const reportBody = document.getElementById('reportBody');
if (!AppState || typeof AppState.getReportData !== 'function' || typeof AppState.getCurrentSortColumn !== 'function' || typeof AppState.getCurrentSortDirection !== 'function') {
console.error("AppState or its methods are not available in filterTableByState.");
if (reportBody) reportBody.innerHTML = '';
return;
}
const reportData = AppState.getReportData();
let filteredData;
if (stateFilter === 'all') {
filteredData = Array.isArray(reportData) ? [...reportData] : [];
} else {
filteredData = Array.isArray(reportData) ? reportData.filter(entry => {
const userState = getUserActualState(entry.username);
return userState === stateFilter;
}) : [];
}
let dataToPopulate = filteredData;
if (AppState.getCurrentSortColumn() && AppState.getCurrentSortDirection()) {
dataToPopulate = sortReportTable(filteredData);
}
populateTableRows(dataToPopulate);
}
/**
* Initialize sorting headers
*/
export function initializeSortingHeaders() {
const sortableHeaders = document.querySelectorAll('.sortable');
if (!AppState || typeof AppState.setCurrentSortColumn !== 'function' || typeof AppState.setCurrentSortDirection !== 'function' || typeof AppState.getReportData !== 'function' || typeof AppState.getCurrentStateFilter !== 'function') {
console.error("AppState or its methods are not available in initializeSortingHeaders.");
return;
}
sortableHeaders.forEach(header => {
header.addEventListener('click', function() {
const sortType = this.getAttribute('data-sort');
const currentDir = this.getAttribute('data-dir');
// Remove sorting indicators from all headers
sortableHeaders.forEach(h => {
h.removeAttribute('data-dir');
});
// Set sort direction - toggle between asc/desc or default to asc
let newDir;
if (currentDir === 'asc') {
newDir = 'desc';
} else {
newDir = 'asc';
}
// Update the header with new direction
this.setAttribute('data-dir', newDir);
// Update sort state
AppState.setCurrentSortColumn(sortType);
AppState.setCurrentSortDirection(newDir);
// Get current data, potentially filtered by active state filter
let dataToProcess = AppState.getReportData();
const activeStateFilter = AppState.getCurrentStateFilter();
if (activeStateFilter && activeStateFilter !== 'all') {
// Ensure reportData is an array before filtering
if (Array.isArray(dataToProcess)) {
dataToProcess = dataToProcess.filter(entry => {
const userState = getUserActualState(entry.username);
return userState === activeStateFilter;
});
} else {
dataToProcess = []; // Or handle as an error
}
}
// Sort the potentially filtered data
const sortedData = sortReportTable(dataToProcess);
// Populate the table with the sorted data
if (sortedData) {
populateTableRows(sortedData);
}
});
});
}
/**
* Sort the report table data.
* @param {Array} dataToSort - Data to sort.
* @returns {Array} - The sorted data array.
*/
export function sortReportTable(dataToSort) {
if (!AppState || typeof AppState.getCurrentSortColumn !== 'function' || typeof AppState.getCurrentSortDirection !== 'function') {
console.error("AppState or its methods are not available in sortReportTable.");
return dataToSort; // Return original data if AppState is not ready
}
const currentSortColumn = AppState.getCurrentSortColumn();
const currentSortDirection = AppState.getCurrentSortDirection();
if (!currentSortColumn || !currentSortDirection) {
return dataToSort; // No sort configured; hand the data back unchanged
}
// Use provided data or fall back to the cached report data
const data = dataToSort || AppState.getReportData();
if (!Array.isArray(data) || data.length === 0) {
return data;
}
const sortedData = [...data].sort((a, b) => {
let comparison = 0;
// Sort based on column type
switch (currentSortColumn) {
case 'duration':
// Convert duration to number for comparison
const durA = a.real_hours_counted === null ? -1 : a.real_hours_counted;
const durB = b.real_hours_counted === null ? -1 : b.real_hours_counted;
comparison = durA - durB;
break;
case 'login':
// Sort by login time
const timeA = a.first_login_time || '';
const timeB = b.first_login_time || '';
comparison = timeA.localeCompare(timeB);
break;
case 'state':
// Sort by user state
const stateA = getUserStateValue(a.username);
const stateB = getUserStateValue(b.username);
comparison = stateA - stateB;
break;
}
// Reverse for descending order
return currentSortDirection === 'desc' ? -comparison : comparison;
});
return sortedData; // Callers are responsible for repopulating the table
}
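
For the `duration` case above, `null` hours are mapped to -1 so users with no recorded time sink to the bottom of a descending sort. Extracted as a standalone comparator (illustrative names, not the module's code):

```javascript
// Null-safe comparator over real_hours_counted; null sorts as -1.
function compareByDuration(a, b, direction) {
  const durA = a.real_hours_counted === null ? -1 : a.real_hours_counted;
  const durB = b.real_hours_counted === null ? -1 : b.real_hours_counted;
  const comparison = durA - durB;
  return direction === 'desc' ? -comparison : comparison;
}
```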

54
static/js/formatters.js Normal file

@ -0,0 +1,54 @@
/**
* formatters.js - Data formatting utilities
*
* This module contains utility functions for formatting time, durations, and other data.
*/
/**
* Format time to GMT+4 timezone (Asia/Dubai)
* This timezone is used consistently throughout the application:
* - PostgreSQL database is set to Asia/Dubai timezone (UTC+4)
* - Backend uses UTC internally but stores in Asia/Dubai in database
* - All displayed times on frontend use this timezone for consistency
*
* @param {string} dateTimeStr - ISO date-time string
* @returns {string} Formatted time string in GMT+4
*/
export function formatTimeToGMT4(dateTimeStr) {
if (!dateTimeStr) return 'N/A';
try {
const date = new Date(dateTimeStr);
// Format the time using the Etc/GMT-4 timezone, which represents GMT+4.
// The locale 'en-US' or 'en-GB' can be used for formatting conventions like AM/PM.
return date.toLocaleTimeString('en-US', {
hour: '2-digit',
minute: '2-digit',
hour12: true,
timeZone: 'Etc/GMT-4'
}) + ' (GMT+4)';
} catch (error) {
console.error("Error formatting time to GMT+4:", error, "Input:", dateTimeStr);
return 'Invalid Date'; // Or return N/A or the original string
}
}
/**
* Convert decimal hours to hours and minutes format
* @param {number} decimalHours - Hours in decimal format
* @returns {string} Formatted duration string
*/
export function formatDurationHours(decimalHours) {
if (decimalHours === null || decimalHours === undefined) return 'N/A';
let hours = Math.floor(decimalHours);
let minutes = Math.round((decimalHours - hours) * 60);
// Rounding can produce 60 minutes (e.g. 1.999 hours); carry into the hour
if (minutes === 60) {
hours += 1;
minutes = 0;
}
if (hours === 0) {
return `${minutes} min`;
} else if (minutes === 0) {
return `${hours} hr`;
} else {
return `${hours} hr ${minutes} min`;
}
}
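As a quick sanity check, the duration conversion can be exercised standalone. The function body below inlines the same logic (plus a guard for the edge case where rounding yields 60 minutes), so the snippet runs directly under Node.js without module imports:

```javascript
// Standalone sanity check for the decimal-hours formatting shown above.
function formatDurationHours(decimalHours) {
  if (decimalHours === null || decimalHours === undefined) return 'N/A';
  let hours = Math.floor(decimalHours);
  let minutes = Math.round((decimalHours - hours) * 60);
  if (minutes === 60) { // e.g. 1.999 would otherwise print "1 hr 60 min"
    hours += 1;
    minutes = 0;
  }
  if (hours === 0) return `${minutes} min`;
  if (minutes === 0) return `${hours} hr`;
  return `${hours} hr ${minutes} min`;
}

console.log(formatDurationHours(1.5));  // 1 hr 30 min
console.log(formatDurationHours(0.25)); // 15 min
console.log(formatDurationHours(8));    // 8 hr
console.log(formatDurationHours(null)); // N/A
```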

164
static/js/state.js Normal file
View File

@ -0,0 +1,164 @@
/**
* state.js - Centralized state management for the dashboard
*
* This module maintains the application state and provides getters/setters
* to allow other modules to interact with the state in a controlled manner.
*/
import { formatDateForAPI } from './dateUtils.js'; // Import from dateUtils
// State container - private to this module
const state = {
// Current view/period
currentPeriod: 'daily', // 'daily', 'weekly', 'monthly'
selectedDate: formatDateForAPI(new Date()), // Current date in YYYY-MM-DD format
currentDate: new Date(), // Date object for the selected date
selectedWeekDay: null, // Selected day within weekly view
// Table data and filtering
reportData: [], // Currently displayed report data
currentStateFilter: 'all', // 'all', 'working', 'not_working'
userFilterText: '', // Text entered in the user filter input
// Sorting
currentSortColumn: 'duration', // Which column is being sorted - default to duration (real work hours)
currentSortDirection: 'desc', // 'asc' or 'desc' - default to descending
// User states
userStates: {}, // Cache of user states {username: state}
userStatesLastUpdate: 0, // Timestamp of last update
STATE_CACHE_DURATION: 5000, // 5 seconds
// Auto-refresh
autoRefreshEnabled: true, // Auto-refresh enabled by default
autoRefreshInterval: 60000, // Default refresh interval (60 seconds). Aligns with client-side 1-minute reporting.
autoRefreshTimer: null, // Holds the timer reference
lastRefreshTime: Date.now(), // Last time data was refreshed
countdownTimer: null, // Timer for updating countdown display
};
// Exported getters and setters
export default {
// Period/view getters and setters
getCurrentPeriod: () => state.currentPeriod,
setCurrentPeriod: (period) => {
state.currentPeriod = period;
return state.currentPeriod;
},
getSelectedDate: () => state.selectedDate,
setSelectedDate: (dateStr) => {
state.selectedDate = dateStr;
// Parse YYYY-MM-DD as local date to avoid timezone interpretation issues
if (dateStr && typeof dateStr === 'string' && dateStr.includes('-')) {
const parts = dateStr.split('-');
if (parts.length === 3) {
const year = parseInt(parts[0], 10);
const month = parseInt(parts[1], 10) - 1; // JavaScript months are 0-indexed
const day = parseInt(parts[2], 10);
// Check if parts are valid numbers before creating date
if (!isNaN(year) && !isNaN(month) && !isNaN(day)) {
state.currentDate = new Date(year, month, day);
} else {
console.warn(`Invalid date parts in setSelectedDate: ${dateStr}`);
state.currentDate = new Date(); // Fallback or handle error appropriately
}
} else {
console.warn(`Invalid date string format in setSelectedDate: ${dateStr}`);
state.currentDate = new Date(); // Fallback
}
} else {
console.warn(`Invalid dateStr input to setSelectedDate: ${dateStr}`);
state.currentDate = new Date(); // Fallback for null or non-string input
}
return state.selectedDate;
},
getCurrentDate: () => state.currentDate,
setCurrentDate: (date) => {
state.currentDate = date;
state.selectedDate = formatDateForAPI(date);
return state.currentDate;
},
getSelectedWeekDay: () => state.selectedWeekDay,
setSelectedWeekDay: (day) => {
state.selectedWeekDay = day;
return state.selectedWeekDay;
},
// Report data getters and setters
getReportData: () => state.reportData,
setReportData: (data) => {
state.reportData = [...data];
return state.reportData;
},
// Filter getters and setters
getCurrentStateFilter: () => state.currentStateFilter,
setCurrentStateFilter: (filter) => {
state.currentStateFilter = filter;
return state.currentStateFilter;
},
getUserFilterText: () => state.userFilterText,
setUserFilterText: (text) => {
state.userFilterText = text;
return state.userFilterText;
},
// Sort getters and setters
getCurrentSortColumn: () => state.currentSortColumn,
setCurrentSortColumn: (column) => {
state.currentSortColumn = column;
return state.currentSortColumn;
},
getCurrentSortDirection: () => state.currentSortDirection,
setCurrentSortDirection: (direction) => {
state.currentSortDirection = direction;
return state.currentSortDirection;
},
// User states getters and setters
getUserStates: () => state.userStates,
setUserStates: (states) => {
state.userStates = {...states};
state.userStatesLastUpdate = Date.now();
return state.userStates;
},
getUserStatesLastUpdate: () => state.userStatesLastUpdate,
getStateCacheDuration: () => state.STATE_CACHE_DURATION,
// Auto-refresh getters and setters
isAutoRefreshEnabled: () => state.autoRefreshEnabled,
setAutoRefreshEnabled: (enabled) => {
state.autoRefreshEnabled = enabled;
return state.autoRefreshEnabled;
},
getAutoRefreshInterval: () => state.autoRefreshInterval,
setAutoRefreshInterval: (interval) => {
state.autoRefreshInterval = interval;
return state.autoRefreshInterval;
},
getAutoRefreshTimer: () => state.autoRefreshTimer,
setAutoRefreshTimer: (timer) => {
state.autoRefreshTimer = timer;
return state.autoRefreshTimer;
},
getLastRefreshTime: () => state.lastRefreshTime,
setLastRefreshTime: (time) => {
state.lastRefreshTime = time;
return state.lastRefreshTime;
},
getCountdownTimer: () => state.countdownTimer,
setCountdownTimer: (timer) => {
state.countdownTimer = timer;
return state.countdownTimer;
}
};
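The manual `YYYY-MM-DD` parsing in `setSelectedDate` exists because the `Date` constructor interprets a bare date string as UTC midnight, which `getDate()` can report as the previous calendar day in timezones west of UTC. A minimal sketch of the local-midnight parse (`parseLocalDate` is an illustrative helper, not part of the module):

```javascript
// Parse "YYYY-MM-DD" as local midnight, mirroring setSelectedDate above.
// new Date('2025-05-16') would instead be UTC midnight, which may shift
// the displayed day in a negative-offset timezone.
function parseLocalDate(dateStr) {
  const [year, month, day] = dateStr.split('-').map(Number);
  return new Date(year, month - 1, day); // JS months are 0-indexed
}

const d = parseLocalDate('2025-05-16');
console.log(d.getFullYear(), d.getMonth() + 1, d.getDate()); // 2025 5 16
```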

275
static/js/tableManager.js Normal file
View File

@ -0,0 +1,275 @@
/**
* tableManager.js - Table management functionality
*
* This module handles all table-related operations including populating,
* updating, and managing the data display in tables.
*/
import AppState from './state.js';
// import { formatTimeToGMT4, formatDurationHours } from './formatters.js'; // formatTimeToGMT4 likely not needed
import { formatDurationHours } from './formatters.js'; // Keep for consistency if displaying hours
import { formatDateForDisplay } from './dateUtils.js';
import { getUserStateDisplay } from './userStates.js';
// The userActivityModal is invoked via a custom event 'user-activity-requested'.
// Direct import of userActivity.js is not needed here as userActivity.js listens for the event.
// // Import will be added when userActivity module is created
// // import { showUserActivityModal } from './userActivity.js';
/**
* Helper function to determine if the "First Login Time" column should be hidden.
* This column is being removed for the new real_work_hours display.
* @returns {boolean} - True, as it's always hidden now.
*/
function _alwaysHideLoginTime() {
return true;
}
/**
* Helper function to show a message in the table body with correct colspan.
* @param {string} message - The message to display.
* @param {string} cssClass - CSS class for the message (e.g., 'text-danger', 'text-muted').
*/
function _showTableMessage(message, cssClass = 'text-muted') {
const reportBody = document.getElementById('reportBody');
if (!reportBody) {
console.error("Table reportBody not found in _showTableMessage.");
return;
}
// Colspan depends only on whether the First Login Time column is shown
const hideLogin = _alwaysHideLoginTime();
const colspan = hideLogin ? 3 : 4;
reportBody.innerHTML = `<tr><td colspan="${colspan}" class="text-center ${cssClass}">${message}</td></tr>`;
}
/**
* Public function to show a message (e.g., error or custom status) in the main report table.
* @param {string} message - The message to display.
* @param {boolean} isError - If true, cssClass will be 'text-danger', otherwise 'text-muted'.
*/
export function showReportTableMessage(message, isError = false) {
_showTableMessage(message, isError ? 'text-danger' : 'text-muted');
}
/**
* Populate the main report table with data
* @param {Array} data - Report data to display
* @param {boolean} isAutoRefresh - Whether this is an auto-refresh operation
*/
export function populateTable(data, isAutoRefresh = false) {
const reportBody = document.getElementById('reportBody');
const periodHeader = document.getElementById('periodHeader');
if (!AppState || typeof AppState.setReportData !== 'function' || typeof AppState.getCurrentPeriod !== 'function' || typeof AppState.getSelectedWeekDay !== 'function') {
console.error("AppState or its methods not available in populateTable");
_showTableMessage("Error loading table: AppState not ready.", "text-danger");
return;
}
if (!reportBody || !periodHeader) {
console.error("Table reportBody or periodHeader not found in populateTable.");
// If reportBody is missing, _showTableMessage won't work either.
return;
}
AppState.setReportData([...data]);
if (!isAutoRefresh) {
const currentPeriod = AppState.getCurrentPeriod();
switch (currentPeriod) {
case 'daily': periodHeader.textContent = 'Day'; break;
case 'weekly': periodHeader.textContent = 'Date'; break;
case 'monthly': periodHeader.textContent = 'Date'; break;
default: periodHeader.textContent = 'Date';
}
// populateTableRows handles the empty-data case
}
if (isAutoRefresh && reportBody.children.length > 0 && data.length > 0) { // Only update if there's data
updateTableData(data);
} else {
populateTableRows(data); // Let populateTableRows handle empty data message too
}
}
/**
* Populate table rows with data
* @param {Array} data - Data to display in the table
*/
export function populateTableRows(data) {
const reportBody = document.getElementById('reportBody');
if (!AppState || typeof AppState.getCurrentPeriod !== 'function' || typeof AppState.getSelectedWeekDay !== 'function') {
console.error("AppState or its methods not available in populateTableRows");
_showTableMessage("Error populating rows: AppState not ready.", "text-danger");
return;
}
if (!reportBody) {
console.error("Table reportBody not found in populateTableRows.");
return;
}
const currentPeriod = AppState.getCurrentPeriod();
const selectedWeekDay = AppState.getSelectedWeekDay();
const hideLogin = _alwaysHideLoginTime();
reportBody.innerHTML = '';
if (!data || data.length === 0) {
// Use the helper for "no data" message as well
_showTableMessage("No data available.", "text-muted");
return;
}
data.forEach(entry => {
const row = document.createElement('tr');
let periodValue = '';
// Determine which period key to use based on currentPeriod
switch (currentPeriod) {
case 'daily': periodValue = entry.day; break;
case 'weekly': periodValue = entry.week_start; break;
case 'monthly': periodValue = entry.month_start; break;
}
// Format date string from ISO (YYYY-MM-DD) to DD/MM/YYYY
let formattedPeriod = formatDateForDisplay(entry.work_date);
// Get user state
const userStateDisplay = getUserStateDisplay(entry.username);
// The First Login Time cell is always omitted (see _alwaysHideLoginTime)
if (hideLogin) {
row.innerHTML = `
<td><a class="user-link" data-user="${entry.username}">${entry.username}</a></td>
<td>${formattedPeriod || 'N/A'}</td>
<td>${formatDurationHours(entry.real_hours_counted)}</td>
<td class="user-state-cell">${userStateDisplay}</td>
`;
}
reportBody.appendChild(row);
});
// Attach a single delegated click listener on reportBody for user links
// (event delegation), replacing any listener left from a previous render
if (reportBody._userLinkClickListener) {
reportBody.removeEventListener('click', reportBody._userLinkClickListener);
}
reportBody._userLinkClickListener = function(event) {
const targetLink = event.target.closest('.user-link');
if (targetLink) {
event.preventDefault(); // Prevent default anchor action if any
const username = targetLink.getAttribute('data-user');
if (username) {
// Dispatch a custom event. userActivity.js listens for this event
// to show the modal, as initialized in dashboard.js.
const customEvent = new CustomEvent('user-activity-requested', {
detail: { username: username }
});
document.dispatchEvent(customEvent);
}
}
};
reportBody.addEventListener('click', reportBody._userLinkClickListener);
}
/**
* Update existing table data without full redraw (for smoother auto-refresh)
* @param {Array} data - Updated data to display
*/
export function updateTableData(data) {
const reportBody = document.getElementById('reportBody');
if (!AppState || typeof AppState.getCurrentPeriod !== 'function' || typeof AppState.getSelectedWeekDay !== 'function') {
console.error("AppState or its methods not available in updateTableData");
return;
}
if (!reportBody) {
console.error("Table reportBody not found in updateTableData.");
return;
}
// Log update for debugging
console.log('Updating table data during auto-refresh:', data);
// Create a map of username + work_date to data for quick lookup
const dataMap = {};
data.forEach(entry => {
// Use a composite key of username and work_date to uniquely identify each row
const key = `${entry.username}_${entry.work_date}`;
dataMap[key] = entry;
});
// For each row in the table, update the data
const rows = reportBody.querySelectorAll('tr');
rows.forEach(row => {
const userLink = row.querySelector('.user-link');
if (!userLink) return;
const username = userLink.getAttribute('data-user');
// Get the date from the second cell
const dateCell = row.querySelector('td:nth-child(2)');
if (!dateCell) return;
// Try to find matching data for this row
// We need to try different formats since the display format might differ from the API format
const displayedDate = dateCell.textContent;
// First try direct match with username only (if we don't have date in key)
let userData = null;
// Try to match by username and work_date from the data
for (const entry of data) {
if (entry.username === username) {
// Format the date for display to compare with what's in the cell
const formattedDate = formatDateForDisplay(entry.work_date);
if (formattedDate === displayedDate || displayedDate === 'N/A') {
userData = entry;
break;
}
}
}
if (userData) {
// Update duration - make sure to use real_hours_counted
const durationCell = row.querySelector('td:nth-child(3)');
if (durationCell) {
durationCell.textContent = formatDurationHours(userData.real_hours_counted);
// Log successful updates for debugging
console.log(`Updated ${username} real_hours_counted to ${userData.real_hours_counted}`);
}
// Update state
const stateCell = row.querySelector('td:last-child');
if (stateCell) {
stateCell.innerHTML = getUserStateDisplay(username);
}
} else {
console.warn(`No matching data found for user ${username} with date ${displayedDate}`);
}
});
}
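The delegated click handler above decouples the table from the modal through a custom `user-activity-requested` event. The handshake can be sketched without a DOM using Node's built-in `EventTarget`; here `bus` stands in for `document`, and a plain `username` property stands in for `CustomEvent`'s `detail` payload:

```javascript
// Sketch of the handshake: tableManager dispatches, userActivity listens.
const bus = new EventTarget();
let requestedUser = null;

// userActivity.js side: react to requests by "opening the modal".
bus.addEventListener('user-activity-requested', (event) => {
  requestedUser = event.username;
});

// tableManager.js side: a row click dispatches the request.
const event = new Event('user-activity-requested');
event.username = 'jdoe'; // stand-in for event.detail.username
bus.dispatchEvent(event);

console.log(requestedUser); // jdoe
```

Because the listener sits on a single parent (`reportBody` in the real code), rows can be re-rendered freely without re-wiring per-link handlers.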

125
static/js/uiHelpers.js Normal file
View File

@ -0,0 +1,125 @@
/**
* uiHelpers.js - UI helper functions
*
* This module contains utility functions for UI operations like showing messages,
* handling toasts, managing UI visibility, etc.
*/
import { formatDateForDisplay } from './dateUtils.js'; // Use from dateUtils for consistency
// UI toast instance
let toastInstance = null;
/**
* Shows a log message in the toast notification
* @param {string} message - Message to display
*/
export function showLogMessage(message) {
const logToast = document.getElementById('logToast');
const logMessage = document.getElementById('logMessage');
const logTime = document.getElementById('logTime');
if (!logToast || !logMessage || !logTime) {
console.warn("Toast UI elements not found for showLogMessage.");
// Still log to console as a fallback
console.log(`[Toast Omitted - UI Missing] [${new Date().toLocaleTimeString()}] ${message}`);
return;
}
// Update toast content
logMessage.textContent = message;
logTime.textContent = new Date().toLocaleTimeString();
// Initialize the toast if not already done
if (!toastInstance) {
toastInstance = new bootstrap.Toast(logToast, {
autohide: true,
delay: 5000
});
}
// Show the toast
toastInstance.show();
// Also log to console for debugging
console.log(`[${new Date().toLocaleTimeString()}] ${message}`);
}
/**
* Show error message in the main error container
* @param {string} message - Error message to display
*/
export function showError(message) {
const errorMessage = document.getElementById('errorMessage');
const reportBody = document.getElementById('reportBody');
if (!errorMessage) {
console.warn("Error message UI element not found for showError.");
// Without the error container there is nowhere to surface the message
return;
}
errorMessage.textContent = `Error: ${message}`;
errorMessage.classList.remove('d-none');
if (reportBody) {
// Assuming 5 columns is a safe maximum or most common case for the main table error display
reportBody.innerHTML = '<tr><td colspan="5" class="text-center text-danger">Failed to load data. See console for details.</td></tr>';
} else {
console.warn("Report body UI element not found for showError to clear table.");
}
}
/**
* Show error message in the modal error container
* @param {string} message - Error message to display
*/
export function showModalError(message) {
const modalErrorMessage = document.getElementById('modalErrorMessage');
const userActivityBody = document.getElementById('userActivityBody');
if (!modalErrorMessage) {
console.warn("Modal error message UI element not found for showModalError.");
return;
}
modalErrorMessage.textContent = `Error: ${message}`;
modalErrorMessage.classList.remove('d-none');
if (userActivityBody) {
// Assuming 4 columns for the user activity table error display
userActivityBody.innerHTML = '<tr><td colspan="4" class="text-center text-danger">Failed to load activity data.</td></tr>';
} else {
console.warn("User activity body UI element not found for showModalError to clear table.");
}
}
/**
* Update the active button state in a button group
* @param {HTMLElement[]} buttons - Array of button elements
* @param {HTMLElement} activeButton - Button to set as active
*/
export function setActiveButton(buttons, activeButton) {
// Remove active class from all buttons
buttons.forEach(btn => btn.classList.remove('active'));
// Add active class to the specified button
activeButton.classList.add('active');
}
/**
* Update the date display in the UI
* @param {string} dateString - Date string in YYYY-MM-DD format
*/
export function updateDateDisplay(dateString) {
const currentDateDisplay = document.getElementById('currentDateDisplay');
if (!currentDateDisplay) {
console.warn("Current date display UI element not found for updateDateDisplay.");
return;
}
currentDateDisplay.textContent = formatDateForDisplay(dateString); // Use from dateUtils
}

143
static/js/userActivity.js Normal file
View File

@ -0,0 +1,143 @@
/**
* userActivity.js - User activity modal functionality
*
* This module handles all functionality related to the user activity modal.
*/
import { loadUserActivityData } from './api.js';
import { formatTimeToGMT4, formatDurationHours } from './formatters.js';
import { formatDateForDisplay } from './dateUtils.js';
import { showModalError } from './uiHelpers.js';
// User activity modal elements references
let userActivityModal = null;
let modalUsername = null;
let startDateInput = null;
let endDateInput = null;
let applyDateRangeBtn = null;
let userActivityBody = null;
let modalLoadingSpinner = null;
let modalErrorMessage = null;
let userActivityModalInstance = null;
/**
* Initialize user activity modal component
*/
export function initializeUserActivityModal() {
// Get references to all modal elements
userActivityModal = document.getElementById('userActivityModal');
modalUsername = document.getElementById('modalUsername');
startDateInput = document.getElementById('startDate');
endDateInput = document.getElementById('endDate');
applyDateRangeBtn = document.getElementById('applyDateRange');
userActivityBody = document.getElementById('userActivityBody');
modalLoadingSpinner = document.getElementById('modalLoadingSpinner');
modalErrorMessage = document.getElementById('modalErrorMessage');
// Check if essential modal elements are found
if (!userActivityModal || !modalUsername || !startDateInput || !endDateInput || !applyDateRangeBtn || !userActivityBody || !modalLoadingSpinner || !modalErrorMessage) {
console.error("One or more essential user activity modal UI elements are missing from the DOM.");
// Optionally, disable functionality that relies on these elements or return early
return;
}
// Initialize date inputs with current date
const today = new Date().toISOString().split('T')[0];
startDateInput.value = today;
endDateInput.value = today;
// Add event listener for Apply Date Range button
applyDateRangeBtn.addEventListener('click', function() {
const username = modalUsername.textContent;
loadActivityData(username);
});
// Listen for custom event from table manager
document.addEventListener('user-activity-requested', function(event) {
const username = event.detail.username;
showUserActivityModal(username);
});
}
/**
* Show user activity modal for a specific user
* @param {string} username - Username to show activity for
*/
export function showUserActivityModal(username) {
// initializeUserActivityModal() should already have run from dashboard.js;
// still verify the essential elements are available before proceeding
if (!userActivityModal || !modalUsername || !userActivityBody || !modalErrorMessage) {
console.error("User activity modal called before essential UI elements were ready or found.");
alert("Error: Activity modal components not loaded. Please refresh."); // Basic fallback
return;
}
modalUsername.textContent = username;
userActivityBody.innerHTML = '';
modalErrorMessage.classList.add('d-none');
// Initialize and show modal with Bootstrap 5
if (!userActivityModalInstance) {
userActivityModalInstance = new bootstrap.Modal(userActivityModal);
}
userActivityModalInstance.show();
// Load user activity data
loadActivityData(username);
}
/**
* Load activity data for a specific user
* @param {string} username - Username to load data for
*/
export async function loadActivityData(username) {
modalLoadingSpinner.classList.remove('d-none');
userActivityBody.innerHTML = '';
modalErrorMessage.classList.add('d-none');
const startDate = startDateInput.value;
const endDate = endDateInput.value;
try {
const activities = await loadUserActivityData(username, startDate, endDate);
populateUserActivityTable(activities);
} catch (error) {
console.error('Error in loadActivityData for user:', username, error);
showModalError(error.message || 'Failed to retrieve activity details.'); // Simplified message
} finally {
modalLoadingSpinner.classList.add('d-none');
}
}
/**
* Populate user activity table with data
* @param {Array} activities - Activity data to display
*/
export function populateUserActivityTable(activities) {
if (!activities || activities.length === 0) {
userActivityBody.innerHTML = '<tr><td colspan="4" class="text-center text-muted">No activity found for the selected date range.</td></tr>';
return;
}
activities.forEach(activity => {
const row = document.createElement('tr');
// Format date and times for display
const date = activity.date ? formatDateForDisplay(activity.date) : 'N/A';
const startTime = formatTimeToGMT4(activity.start_time);
const endTime = formatTimeToGMT4(activity.end_time);
row.innerHTML = `
<td>${date}</td>
<td>${startTime}</td>
<td>${endTime}</td>
<td>${formatDurationHours(activity.duration_hours)}</td>
`;
userActivityBody.appendChild(row);
});
}

136
static/js/userStates.js Normal file
View File

@ -0,0 +1,136 @@
/**
* userStates.js - User state management
*
* This module handles all functionality related to user working states.
*/
import AppState from './state.js';
import { fetchUserStates } from './api.js';
/**
* Get user state display HTML
* @param {string} username - Username to get state for
* @returns {string} - HTML string for displaying the user state
*/
export function getUserStateDisplay(username) {
const state = getUserActualState(username);
if (state === 'working') {
return '<span class="badge bg-success">Working</span>';
} else if (state === 'not_working') {
return '<span class="badge bg-secondary">Not Working</span>';
} else {
return '<span class="badge bg-light text-dark">Unknown</span>';
}
}
/**
* Get the actual state for a user (from API or simulation)
* @param {string} username - Username to get state for
* @returns {string} - User state ('working', 'not_working', or 'unknown')
*/
export function getUserActualState(username) {
// Return the state reported by the API if available, otherwise 'unknown'
if (AppState && typeof AppState.getUserStates === 'function') {
const states = AppState.getUserStates();
return states[username] || 'unknown';
}
console.warn("AppState.getUserStates is not available in getUserActualState. Returning 'unknown'.");
return 'unknown'; // Fallback if AppState is not ready
}
/**
* Get state value for sorting (numeric value)
* @param {string} username - Username to get state value for
* @returns {number} - Numeric state value for sorting
*/
export function getUserStateValue(username) {
// Return a numeric value for sorting states
// Working (2) > Unknown (1) > Not Working (0)
const state = getUserActualState(username);
if (state === 'working') {
return 2;
} else if (state === 'unknown') {
return 1;
} else {
return 0;
}
}
/**
* Update displayed states in the table
*/
export function updateTableStates() {
// Get all user rows
const userRows = document.querySelectorAll('.user-link');
userRows.forEach(userLink => {
const username = userLink.getAttribute('data-user');
const row = userLink.closest('tr');
const stateCell = row.querySelector('.user-state-cell');
if (stateCell) {
stateCell.innerHTML = getUserStateDisplay(username);
}
});
}
/**
* Refresh user states from server
* @param {boolean} forceRefresh - Force refresh even if cache is valid
* @returns {Promise<Object>} - Promise resolving to user states
*/
export async function refreshUserStates(forceRefresh = false) {
const refreshStatesBtn = document.getElementById('refreshStates');
// Check if AppState and its methods are available
if (!AppState || typeof AppState.isAutoRefreshEnabled !== 'function') {
console.error("AppState or AppState.isAutoRefreshEnabled is not available in refreshUserStates.");
// Attempt to restore button if it exists, then return
if (refreshStatesBtn) {
refreshStatesBtn.innerHTML = '<i class="bi bi-arrow-clockwise"></i> Refresh States';
refreshStatesBtn.disabled = false;
}
return Promise.reject(new Error("AppState not ready for refreshing user states.")); // Return a rejected promise
}
// Show a small spinner only if force refresh and not auto-refresh
if (refreshStatesBtn && forceRefresh && !AppState.isAutoRefreshEnabled()) {
refreshStatesBtn.innerHTML = '<span class="spinner-border spinner-border-sm" role="status" aria-hidden="true"></span> Refreshing...';
refreshStatesBtn.disabled = true;
} else if (!refreshStatesBtn && forceRefresh && !AppState.isAutoRefreshEnabled()) {
console.warn("refreshStatesBtn not found, cannot update UI during state refresh.");
}
try {
// Fetch the latest user states
const states = await fetchUserStates(forceRefresh);
// Update existing rows with new states
updateTableStates();
// Reapplying the state filter here would be redundant: dashboard.js reapplies
// it after loadReportData completes, and doing it in both places risks
// double filtering or a race condition
return states;
} finally {
if (refreshStatesBtn && forceRefresh && !AppState.isAutoRefreshEnabled()) {
refreshStatesBtn.innerHTML = '<i class="bi bi-arrow-clockwise"></i> Refresh States';
refreshStatesBtn.disabled = false;
}
}
}
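The numeric ranking from `getUserStateValue` is what lets the state column sort like any other numeric column. A compact illustration (the usernames and states are made-up sample data):

```javascript
// Working (2) > Unknown (1) > Not Working (0), as in getUserStateValue.
const STATE_RANK = { working: 2, unknown: 1, not_working: 0 };

const rows = [
  { username: 'user_a', state: 'not_working' },
  { username: 'user_b', state: 'working' },
  { username: 'user_c', state: 'unknown' },
];

// Descending sort: working users first, then unknown, then not working.
rows.sort((a, b) => STATE_RANK[b.state] - STATE_RANK[a.state]);
console.log(rows.map(r => r.username)); // [ 'user_b', 'user_c', 'user_a' ]
```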

213
templates/dashboard.html Normal file
View File

@ -0,0 +1,213 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>Employee Activity Dashboard</title>
<!-- Favicon -->
<link rel="icon" href="{{ url_for('static', filename='favicon.ico') }}" type="image/x-icon">
<!-- Bootstrap CSS -->
<link href="https://cdn.jsdelivr.net/npm/bootstrap@5.3.0/dist/css/bootstrap.min.css" rel="stylesheet">
<!-- Bootstrap Icons -->
<link rel="stylesheet" href="https://cdn.jsdelivr.net/npm/bootstrap-icons@1.10.0/font/bootstrap-icons.css">
<!-- Custom CSS -->
<link rel="stylesheet" href="{{ url_for('static', filename='css/dashboard.css') }}">
</head>
<body>
<div class="container">
<header class="mb-4">
<h1 class="text-center my-4">Employee Working Time Report</h1>
<p class="text-center text-muted">Aggregated working hours by user</p>
</header>
<div class="row">
<!-- Time Filter & User Filter -->
<div class="col-12 mb-3">
<div class="card">
<div class="card-header">Filters</div>
<div class="card-body d-flex justify-content-between align-items-center">
<div>
<span class="me-2 fw-bold">Time Period:</span>
<div class="btn-group">
<button class="btn btn-outline-primary btn-sm btn-filter active" data-period="daily">Today</button>
<button class="btn btn-outline-primary btn-sm btn-filter" data-period="weekly">This Week</button>
<button class="btn btn-outline-primary btn-sm btn-filter" data-period="monthly">This Month</button>
</div>
<!-- Date navigation controls (initially hidden) -->
<div id="dateNavControls" class="mt-2 d-none">
<div class="d-flex align-items-center">
<button id="prevDateBtn" class="btn btn-sm btn-outline-secondary me-2">
<i class="bi bi-chevron-left"></i>
</button>
<span id="currentDateDisplay" class="me-2"></span>
<button id="nextDateBtn" class="btn btn-sm btn-outline-secondary me-2">
<i class="bi bi-chevron-right"></i>
</button>
<button id="calendarBtn" class="btn btn-sm btn-outline-secondary" title="Select specific date">
<i class="bi bi-calendar3"></i>
</button>
<input type="date" id="dateSelector" class="form-control form-control-sm ms-2" style="display: none;">
</div>
</div>
<!-- Day selector for weekly view (initially hidden) -->
<div id="weekDaySelector" class="mt-2 d-none">
<span class="me-2">Select day:</span>
<select id="weekDaySelect" class="form-select form-select-sm" style="display: inline-block;">
<!-- Options will be populated by JavaScript -->
</select>
</div>
</div>
<div class="d-flex align-items-center">
<label for="userFilter" class="form-label me-2 mb-0 fw-bold">User:</label>
<div id="userFilterInputGroup" class="input-group">
<input type="text" id="userFilter" class="form-control form-control-sm" placeholder="Filter by username...">
<button class="btn btn-outline-secondary btn-sm" type="button" id="clearUserFilter">
<i class="bi bi-x"></i> Clear
</button>
</div>
</div>
</div>
</div>
</div>
</div>
<!-- State Filter -->
<div class="row">
<div class="col-12 mb-3">
<div class="card">
<div class="card-header">State Filter</div>
<div class="card-body">
<div class="d-flex justify-content-between align-items-center">
<div class="btn-group" role="group" aria-label="State filter">
<button type="button" class="btn btn-outline-secondary active" id="stateFilterAll">All</button>
<button type="button" class="btn btn-outline-success" id="stateFilterWorking">Working</button>
<button type="button" class="btn btn-outline-secondary" id="stateFilterNotWorking">Not Working</button>
</div>
<button type="button" class="btn btn-outline-primary" id="refreshStates">
<i class="bi bi-arrow-clockwise"></i> Refresh States
</button>
                        <!-- Auto-refresh controls -->
<button id="autoRefreshBtn" class="btn btn-sm btn-outline-info ms-2">
<i class="bi bi-pause-circle"></i> Pause Auto-refresh
</button>
                        <span id="autoRefreshIndicator" class="badge bg-light text-dark ms-2">Auto-refresh in 60s</span>
</div>
</div>
</div>
</div>
</div>
<div class="row">
<!-- Working Time Report Table -->
<div class="col-12">
<div class="card">
<div class="card-header d-flex justify-content-between align-items-center">
<span>Working Time Report</span>
<div id="loadingSpinner" class="spinner-border text-primary d-none" role="status">
<span class="visually-hidden">Loading...</span>
</div>
</div>
<div class="card-body">
<div class="table-responsive">
<table class="table table-striped table-hover" id="reportTable">
<thead>
<tr>
<th>User</th>
<th id="periodHeader">Day</th> <!-- Will be updated by JS -->
<th class="sortable" data-sort="duration">Real Work Hours <i class="bi bi-sort-down sort-icon"></i></th>
<th class="sortable" data-sort="state">State <i class="bi bi-sort-down sort-icon"></i></th>
</tr>
</thead>
<tbody id="reportBody">
<!-- Report data will be populated via JavaScript -->
<tr>
<td colspan="4" class="text-center text-muted">Select filters and load data.</td>
</tr>
</tbody>
</table>
</div>
<div id="errorMessage" class="alert alert-danger d-none mt-3" role="alert">
Error loading report data. Please try again.
</div>
</div>
</div>
</div>
</div>
</div>
<!-- User Activity Details Modal -->
<div class="modal fade" id="userActivityModal" tabindex="-1" aria-labelledby="userActivityModalLabel" aria-hidden="true">
<div class="modal-dialog modal-lg">
<div class="modal-content">
<div class="modal-header">
<h5 class="modal-title" id="userActivityModalLabel">User Activity Details</h5>
<button type="button" class="btn-close" data-bs-dismiss="modal" aria-label="Close"></button>
</div>
<div class="modal-body">
<div class="d-flex justify-content-between mb-3">
<div>
<strong>User:</strong> <span id="modalUsername"></span>
</div>
<div>
<div class="input-group input-group-sm">
<span class="input-group-text">Date Range</span>
<input type="date" id="startDate" class="form-control">
<span class="input-group-text">to</span>
<input type="date" id="endDate" class="form-control">
<button class="btn btn-outline-primary" id="applyDateRange">Apply</button>
</div>
</div>
</div>
<div id="modalLoadingSpinner" class="d-flex justify-content-center py-3 d-none">
<div class="spinner-border text-primary" role="status">
<span class="visually-hidden">Loading...</span>
</div>
</div>
<div class="table-responsive">
<table class="table table-striped" id="userActivityTable">
<thead>
<tr>
<th>Date</th>
<th>Start Time</th>
<th>End Time</th>
<th>Duration (Hours)</th>
</tr>
</thead>
<tbody id="userActivityBody">
<!-- Activity data will be populated here -->
</tbody>
</table>
</div>
<div id="modalErrorMessage" class="alert alert-danger d-none mt-3" role="alert">
Error loading activity data.
</div>
</div>
<div class="modal-footer">
<button type="button" class="btn btn-secondary" data-bs-dismiss="modal">Close</button>
</div>
</div>
</div>
</div>
<!-- Toast Notification for Logs -->
<div class="position-fixed bottom-0 end-0 p-3 toast-container-custom">
<div id="logToast" class="toast" role="alert" aria-live="assertive" aria-atomic="true">
<div class="toast-header">
<strong class="me-auto">System Log</strong>
<small class="text-muted" id="logTime"></small>
<button type="button" class="btn-close" data-bs-dismiss="toast" aria-label="Close"></button>
</div>
<div class="toast-body" id="logMessage">
States refreshed successfully.
</div>
</div>
</div>
<!-- Bootstrap JS -->
<script src="https://cdn.jsdelivr.net/npm/bootstrap@5.3.0/dist/js/bootstrap.bundle.min.js"></script>
<!-- Custom JS Modules -->
<script src="{{ url_for('static', filename='js/dashboard.js') }}" type="module"></script>
</body>
</html>

27
trigger_recalculate.py Executable file
View File

@ -0,0 +1,27 @@
#!/usr/bin/env python
"""
Script to manually trigger recalculation of user work hours
"""
import sys
import functools
from app import create_app
from app.services.work_hours_service import calculate_and_store_real_work_hours
from app.scheduler import job_wrapper
# Create the app instance (same as what Flask does on startup)
app = create_app()
print("Starting manual recalculation of work hours...")
try:
# Call the calculation function exactly as the scheduler would, but with force_recalculate=True
# This ensures we process all events from scratch rather than only new ones
wrapped_job_func = job_wrapper(
functools.partial(calculate_and_store_real_work_hours, app, force_recalculate=True),
app,
'manual_calc_real_work_hours'
)
wrapped_job_func()
print("Recalculation completed successfully.")
except Exception as e:
print(f"Error during recalculation: {e}")
sys.exit(1)

307
workflow_state.md Normal file
View File

@ -0,0 +1,307 @@
# Workflow State & Rules (STM + Rules + Log)
*This file contains the dynamic state, embedded rules, active plan, and log for the current session.*
*It is read and updated frequently by the AI during its operational loop.*
---
## State
*Holds the current status of the workflow.*
```yaml
Phase: IDLE
Status: TASK_COMPLETED
CurrentTaskID: ChangeIdleThreshold
CurrentStep: ""
CurrentItem: ""
```
---
## Plan
*Contains the step-by-step implementation plan generated during the BLUEPRINT phase.*
**Task: ChangeIdleThreshold**
*Change the idle threshold that determines when a user is considered inactive from 1 minute to 10 minutes, consistently across the application.*
* `[✓] Step CIT-1: Update client-side PowerShell script`
* `[✓] Modify `client_tools/report.ps1`:`
* `[✓] Locate the variable `$IdleThresholdMinutes` and change its default value from 1 to 10`
* `[✓] Update any related comments to reflect the new threshold`
* `[✓] Step CIT-2: Update client-side configuration documentation`
* `[✓] Modify `README.md`:`
* `[✓] Update the example `config.env` under "Client-Side Setup" to show `IDLE_THRESHOLD_MINUTES="10"`
* `[✓] Update any related documentation about idle detection to reference 10 minutes instead of 5`
* `[✓] Step CIT-3: Update server-side code references`
* `[✓] Check for any server-side references to the idle threshold value in the Flask application`
* `[✓] Updated the auto_timeout_seconds in app/api/reports.py from 6 minutes to 10 minutes`
* `[✓] Step CIT-4: Update project configuration documentation`
* `[✓] Modify `project_config.md`:`
* `[✓] Update "Idle Detection Logic" to reference 10 minutes instead of 5 minutes`
* `[✓] Step CIT-5: Verify configuration consistency`
* `[✓] Ensure all references to the idle threshold are consistently set to 10 minutes across the application`
* `[✓] Updated app.py fetch_user_activity function to use 10-minute idle threshold`
* `[✓] Updated comments in report.ps1 about periodic reporting`
**Task: ChangeRealWorkHoursToProductionLogic**
*Change the test logic in the real work hours tracking system to production logic, where every consecutive 40 minutes (not 2 minutes) counts as 1 hour (not 3 hours).*
* **Step CRWH-1: Update the work_hours_service.py file**
* Modify app/services/work_hours_service.py:
* Change `if consecutive_working_minutes == 2:` to `if consecutive_working_minutes == 40:`
* Change `logged_hours = 3` to `logged_hours = 1`
* Remove "TEST LOGIC:" prefix from log message at line 158
* Remove "TEST LOGIC" reference in the final completion log message at line 260
* **Step CRWH-2: Update the database schema**
* Modify database_utils/001_create_user_real_work_summary.sql:
* Add the `last_event_completed_block BOOLEAN NOT NULL DEFAULT FALSE` column to the table definition
* **Step CRWH-3: Verify changes**
* Verify that all code changes are consistent
* Make sure all references to 40-minute blocks and real work hours are accurate
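As a minimal sketch of the production rule above (every 40 consecutive 'working' minutes logs 1 hour), assuming a plain list of per-minute states rather than the actual `WorkEvent` rows processed by `work_hours_service.py`:

```python
def count_real_hours(minute_states):
    """Count credited hours from an ordered list of per-minute states.

    Every run of 40 consecutive "working" minutes credits 1 hour and
    resets the streak; any other state also resets the streak.
    Illustrative only -- the real logic in
    app/services/work_hours_service.py operates on WorkEvent rows.
    """
    hours = 0
    streak = 0
    for state in minute_states:
        if state == "working":
            streak += 1
            if streak == 40:   # production threshold (test logic used 2)
                hours += 1     # production credit (test logic used 3)
                streak = 0
        else:
            streak = 0
    return hours
```

For example, 85 uninterrupted 'working' minutes credit 2 hours, while a single 'stopped' minute in the middle of a 40-minute run resets the streak to zero.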
**Task: ImplementRealWorkHoursTracking**
*Create `user_real_work_summary` table. Implement an automated, scheduled task within Flask (using a library like APScheduler) to populate this table with 'real work hours' (40 consecutive 'working' minutes = 1 hour), with an optional CLI command for manual triggering/testing. Create a new API endpoint to serve this data. Update the frontend dashboard to display these 'real work hours' instead of the previous duration calculation.*
* **Step IRWHT-1: Database Setup**
* Create a new SQL file (e.g., `database_utils/001_create_user_real_work_summary.sql`) with the `CREATE TABLE IF NOT EXISTS user_real_work_summary (id SERIAL PRIMARY KEY, username VARCHAR(255) NOT NULL, work_date DATE NOT NULL, real_hours_counted INTEGER NOT NULL DEFAULT 0, last_processed_event_id INTEGER, CONSTRAINT uq_user_work_date UNIQUE (username, work_date));` statement.
* Update `app/models.py` to add a new SQLAlchemy model class `UserRealWorkSummary` mapping to this table.
* Update `README.md` to include instructions for applying this new SQL script. Determine if this script should be incorporated into the `flask init-db` command or remain a manual step.
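The model mapping called for in IRWHT-1 might look roughly like the following; this standalone sketch uses plain SQLAlchemy declarative mapping, whereas the real `app/models.py` presumably subclasses the app's Flask-SQLAlchemy `db.Model` base:

```python
# Standalone sketch of the UserRealWorkSummary mapping (an assumption;
# column names are taken from the CREATE TABLE statement in IRWHT-1).
from sqlalchemy import Column, Date, Integer, String, UniqueConstraint
from sqlalchemy.orm import declarative_base

Base = declarative_base()

class UserRealWorkSummary(Base):
    __tablename__ = "user_real_work_summary"
    __table_args__ = (
        UniqueConstraint("username", "work_date", name="uq_user_work_date"),
    )

    id = Column(Integer, primary_key=True)
    username = Column(String(255), nullable=False)
    work_date = Column(Date, nullable=False)
    real_hours_counted = Column(Integer, nullable=False, default=0)
    last_processed_event_id = Column(Integer)
```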
* **Step IRWHT-2 (Revised - In-App Scheduler): Implement Automated Work Hour Calculation Task**
* Add `APScheduler` to `requirements.txt`.
* Create a new Python module for the calculation logic (e.g., `app/services/work_hours_service.py` or `app/tasks/calculator.py`).
* Implement the core function, `calculate_and_store_real_work_hours()`, in this module. This function will:
* Fetch unprocessed `WorkEvent` records (efficiently using `last_processed_event_id` from `UserRealWorkSummary` for each user to avoid reprocessing all events).
* Iterate through these events on a per-user basis, ordered by timestamp.
* Carefully track sequences of 40 consecutive 'working' state events (where each event represents approximately 1 minute of work).
* Perform UPSERT (insert or update) operations on the `UserRealWorkSummary` table to increment `real_hours_counted` and update `last_processed_event_id` for the corresponding `username` and `work_date`.
* Ensure the logic correctly handles the `work_date` derivation from event timestamps and manages `last_processed_event_id` for robust incremental processing.
* Integrate and configure `APScheduler` within the Flask application (e.g., in `app/__init__.py` or a dedicated `app/scheduler.py` module that is initialized by the app factory).
* Define a scheduled job within APScheduler to call the `calculate_and_store_real_work_hours()` function periodically (e.g., every 10-15 minutes; the exact interval to be determined based on desired freshness vs. server load).
* Create a Flask CLI command (e.g., in `app/cli.py`, named `flask process-real-hours`) that directly calls the `calculate_and_store_real_work_hours()` function. This command is for manual execution (testing, initial backfilling, ad-hoc runs).
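The incremental UPSERT described in IRWHT-2 can be illustrated with stdlib `sqlite3`; the real application targets PostgreSQL via SQLAlchemy, so treat the simplified DDL and sample values here as illustrative assumptions:

```python
# sqlite3 sketch of the per-(username, work_date) UPSERT from IRWHT-2.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE user_real_work_summary (
        id INTEGER PRIMARY KEY,
        username TEXT NOT NULL,
        work_date TEXT NOT NULL,
        real_hours_counted INTEGER NOT NULL DEFAULT 0,
        last_processed_event_id INTEGER,
        UNIQUE (username, work_date)
    )
""")

def upsert_hours(conn, username, work_date, hours, last_event_id):
    """Insert a row, or increment hours and advance the event cursor."""
    conn.execute("""
        INSERT INTO user_real_work_summary
            (username, work_date, real_hours_counted, last_processed_event_id)
        VALUES (?, ?, ?, ?)
        ON CONFLICT (username, work_date) DO UPDATE SET
            real_hours_counted =
                real_hours_counted + excluded.real_hours_counted,
            last_processed_event_id = excluded.last_processed_event_id
    """, (username, work_date, hours, last_event_id))

upsert_hours(conn, "alice", "2025-05-16", 1, 120)
upsert_hours(conn, "alice", "2025-05-16", 1, 185)  # a second 40-min block
row = conn.execute(
    "SELECT real_hours_counted, last_processed_event_id "
    "FROM user_real_work_summary WHERE username = 'alice'"
).fetchone()
```

After the second call, `row` holds the accumulated total and the advanced `last_processed_event_id`, which is what makes re-running the job idempotent over already-processed events.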
* **Step IRWHT-3 (Revised): Create New Backend API Endpoint for Real Work Hours**
* In `app/api/reports.py` (or a new relevant API file if preferred, e.g., `app/api/real_time_reports.py`), define a new API endpoint (e.g., `/api/reports/real_work_hours`).
* This endpoint should accept parameters such as `username` (optional filter), `start_date`, and `end_date` (or a period parameter like 'today', 'current_week', 'current_month' for predefined ranges).
* The endpoint's logic will query the `user_real_work_summary` table (using the `UserRealWorkSummary` model) to fetch `real_hours_counted` for the specified users and timeframes.
* It should return the data in a JSON format suitable for frontend consumption (e.g., a list of dictionaries: `[{'user': 'name', 'date': 'YYYY-MM-DD', 'real_hours_counted': X}, ...]` or aggregated as per the period requested).
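A rough shape for the IRWHT-3 endpoint; the in-memory `SUMMARY_ROWS` list is a hypothetical stand-in for the `UserRealWorkSummary` query, and the filter semantics (inclusive date bounds, exact username match) are assumptions rather than the actual implementation:

```python
# Hypothetical /api/reports/real_work_hours endpoint sketch (IRWHT-3).
from flask import Flask, jsonify, request

app = Flask(__name__)

# Stand-in rows; the real endpoint queries user_real_work_summary.
SUMMARY_ROWS = [
    {"user": "alice", "date": "2025-05-16", "real_hours_counted": 6},
    {"user": "bob", "date": "2025-05-16", "real_hours_counted": 4},
]

@app.route("/api/reports/real_work_hours")
def real_work_hours():
    username = request.args.get("username")      # optional exact match
    start_date = request.args.get("start_date")  # inclusive, YYYY-MM-DD
    end_date = request.args.get("end_date")      # inclusive, YYYY-MM-DD
    rows = SUMMARY_ROWS
    if username:
        rows = [r for r in rows if r["user"] == username]
    if start_date:
        rows = [r for r in rows if r["date"] >= start_date]
    if end_date:
        rows = [r for r in rows if r["date"] <= end_date]
    return jsonify(rows)
```

ISO-8601 date strings compare correctly as plain strings, which keeps the sketch's date filtering simple; the real endpoint would filter on `work_date` in SQL instead.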
* **Step IRWHT-4 (Revised): Update Frontend to Display Real Work Hours**
* In `static/js/dashboard.js` (or `tableManager.js` or other relevant JavaScript modules):
* Modify the existing JavaScript functions responsible for fetching and displaying what was previously work duration.
* These functions should now be updated to call the new `/api/reports/real_work_hours` endpoint.
* The data parsing logic within these JavaScript functions must be updated to correctly use the `real_hours_counted` field from the new endpoint's JSON response.
* In `templates/dashboard.html`:
* Update table headers (e.g., from "Duration (Hours)" to "Real Work Hours" or "Focused Work Blocks").
* Update any other descriptive text on the dashboard if necessary to accurately reflect that the new "real work hours" metric is being displayed.
* Ensure the table columns correctly bind to and display the new data structure provided by the updated JavaScript logic.
* **Step IRWHT-5: Documentation Updates**
* Update `README.md` to:
* Add `APScheduler` to the list of dependencies in `requirements.txt` section.
* Describe the new `user_real_work_summary` table, its columns, and its specific purpose in tracking 40-minute focused work blocks.
* Document the automated scheduled task (managed by APScheduler) for processing real work hours, mentioning its approximate frequency and its role in keeping `user_real_work_summary` up-to-date.
* Document the Flask CLI command (`flask process-real-hours`) for manual/testing/backfilling.
* Document the new `/api/reports/real_work_hours` API endpoint, including its accepted parameters (e.g., `username`, `start_date`, `end_date`, `period`) and the structure of its JSON response.
* Update any frontend/dashboard usage descriptions or screenshots to reflect the change in how work hours are presented (now showing "real work hours").
* **Step IRWHT-6: Testing and Verification**
* Thoroughly test the `calculate_and_store_real_work_hours()` function directly and via the CLI command (`flask process-real-hours`). Prepare diverse sample `work_events` data.
* Verify that the `user_real_work_summary` table is populated accurately.
* Test incremental processing using `last_processed_event_id`.
* Verify APScheduler is correctly configured and the scheduled job triggers the calculation function as expected (e.g., by checking logs or observing table updates in a development environment with a short interval).
* Test the new `/api/reports/real_work_hours` API endpoint.
* Verify frontend dashboard updates.
```yaml
Phase: CONSTRUCT
Status: AWAITING_USER_VALIDATION
CurrentTaskID: ImplementRealWorkHoursTracking
CurrentStep: IRWHT-6
CurrentItem: "User Testing and Verification"
```
**Task: OrganizeProjectRootAndSimplifyStartup**
*Clean up the project's root directory by removing unused scripts, reorganizing existing files into appropriate subdirectories, and providing a single, enhanced script for starting the application.*
* `[✓] Step OPRS-1: Analyze Root Directory Files and Plan Relocations/Deletions`
* Identify files in the root directory (e.g., `check_db.py`, `config.env`, `create_db.sql`, `ecosystem.config.js`, `fix_task.cmd`, `report.ps1`, `run_hidden.vbs`, `schedule_task.ps1`, `start_app.sh`).
* Determine which files to keep in root, move to new/existing subdirectories, or delete.
* Specifically:
* Keep in root: `config.env`, `ecosystem.config.js`, `README.md`, `requirements.txt`, `run.py`, `project_config.md`, `workflow_state.md`, `.gitignore`, `.cursorignore`.
* Create `client_tools/` directory. Move `report.ps1`, `schedule_task.ps1`, `run_hidden.vbs` into `client_tools/`.
* Create `database_utils/` directory. Move `create_db.sql` into `database_utils/`.
* Mark `check_db.py` and `fix_task.cmd` for deletion after a brief content review to confirm redundancy/obsolescence.
* `[✓] Step OPRS-2: Review Content of Potentially Unused Scripts`
* Read `check_db.py` to confirm its functionality is covered by `flask init-db` or app's auto-init.
* Read `fix_task.cmd` to understand its purpose and determine if it's still needed.
* `[✓] Step OPRS-3: Implement File Deletions`
* `[✓] Delete `check_db.py` if confirmed redundant.`
* `[✓] Delete `fix_task.cmd` if confirmed obsolete or unused.`
* `[✓] Step OPRS-4: Implement File Relocations`
* `[✓] Create the directory `client_tools`.`
* `[✓] Move `report.ps1` to `client_tools/report.ps1`.`
* `[✓] Move `schedule_task.ps1` to `client_tools/schedule_task.ps1`.`
* `[✓] Move `run_hidden.vbs` to `client_tools/run_hidden.vbs`.`
* `[✓] Create the directory `database_utils`.`
* `[✓] Move `create_db.sql` to `database_utils/create_db.sql`.`
* `[✓] Step OPRS-5: Enhance `start_app.sh` as the Single Startup Script`
* `[✓] Read the current `start_app.sh`.`
* `[✓] Modify `start_app.sh` to:
* `[✓] Check for and activate the virtual environment (`venv`).`
* `[✓] Prompt the user to choose between 'development' or 'production' mode (or detect via an argument).`
* `[✓] If 'development', run `python run.py`.`
* `[✓] If 'production', run `gunicorn -w 4 -b 0.0.0.0:5000 "app:create_app()"`.`
* `[✓] Include basic error checking (e.g., venv not found, `run.py` not found).`
* `[✓] Step OPRS-6: Update `README.md``
* `[✓] Modify the "Project Structure" section to reflect the new `client_tools/` and `database_utils/` directories and the removal/relocation of files.`
* `[✓] Update "Installation Instructions" and "Usage Examples" to refer to `start_app.sh` as the primary way to run the server.`
* `[✓] Ensure client-side setup instructions correctly point to files now in `client_tools/`.
* `[✓] Step OPRS-7: Update `.gitignore``
* `[✓] Ensure `client_tools/` and `database_utils/` are tracked (i.e., not in `.gitignore`).`
* `[✓] Verify `instance/`, `venv/`, `__pycache__/`, `*.pyc` remain ignored.`
* `[✓] Step OPRS-8: Final Review and State Update`
* `[✓] Review all changes for consistency and correctness.`
* `[✓] Update `workflow_state.md` to `Phase: IDLE`, `Status: TASK_COMPLETED` for `OrganizeProjectRootAndSimplifyStartup`.
* `[✓] Step CAPT5-6: Check and update `project_config.md` if it specifies the port. (No hardcoded port found, no change needed).`
* `[✓] Step CAPT5-7: Ask user to try running ./start_app.sh again and confirm functionality on port 5050.`
**Task: DiagnoseAndFixRuntimeModuleError**
*Diagnose and resolve the `ModuleNotFoundError: No module named 'dotenv'` when running `start_app.sh`.*
* `[✓] Step DFRME-1: Log the error and identify the likely cause as missing dependencies in the virtual environment.`
* `[✓] Step DFRME-2: Instruct the user to activate the virtual environment and run `pip install -r requirements.txt` to ensure all dependencies are correctly installed.`
* `[✓] Step DFRME-3: User confirmed re-running `./start_app.sh` did not resolve the issue.`
* `[✓] Step DFRME-4: Verify `python-dotenv` is in `requirements.txt`.`
* `[✓] Read `requirements.txt`. Confirm `python-dotenv` is listed.
* `[✓] Step DFRME-5: Modify `start_app.sh` for more diagnostics.`
* `[✓] Inside the "development mode" block, before `python run.py` or `python3 run.py`:
* `[✓] Add `echo "Which python: $(which python)"`
* `[✓] Add `echo "Which python3: $(which python3)"`
* `[✓] Add `echo "PYTHONPATH: $PYTHONPATH"`
* `[✓] Add `echo "Active Python version: $(python --version || python3 --version)"`
* `[✓] Add `echo "Attempting to import dotenv directly via command line..."`
* `[✓] Add `python -c "import dotenv; print('dotenv path:', dotenv.__file__)" || python3 -c "import dotenv; print('dotenv path:', dotenv.__file__)"`
* `[✓] Step DFRME-6: User re-ran `./start_app.sh` and provided the new diagnostic output.`
* `[✓] Step DFRME-7: Analyze diagnostic output and propose further steps.`
* `[✓] Output shows venv Python is used but still can't import dotenv.`
* `[✓] User explicitly used `venv/bin/pip install -r requirements.txt`.`
* `[✓] User ran `venv/bin/pip show python-dotenv` and provided output.`
* `[✓] User tried `./start_app.sh` again, issue persists.`
* `[✓] Identified Python version mismatch: venv is Python 3.12, but pip installed to a python3.11 site-packages directory within venv.`
* `[✓] Step DFRME-8: Recreate virtual environment with consistent Python version.`
* `[✓] Instruct user to deactivate and delete the current `venv` directory.`
* `[✓] Instruct user to recreate venv using `python3.11 -m venv venv` (user specified `python3.11`).`
* `[✓] Instruct user to activate the new venv.`
* `[✓] Instruct user to run `pip install -r requirements.txt` inside the new venv.`
* `[✓] Step DFRME-9: User re-ran `./start_app.sh`; `dotenv` error resolved, new `ImportError` appeared.`
**Task: FixImportErrorInUtils**
*Resolve the `ImportError: cannot import name 'filter_sql_by_user' from 'app.utils.queries'`.*
* `[✓] Step FIEIU-1: Log the new ImportError.`
* `[✓] Step FIEIU-2: Read `app/utils/queries.py` to check for the definition of `filter_sql_by_user`. (Confirmed it was removed).`
* `[✓] Step FIEIU-3: Read `app/utils/__init__.py` to examine the import statement.`
* `[✓] Step FIEIU-4: Based on findings, either correct the function name in `queries.py` or `__init__.py`, or add the function definition if missing, or adjust the import statement.`
* `[✓] Confirmed `filter_sql_by_user` is obsolete.`
* `[✓] Remove `filter_sql_by_user` from the import statement in `app/utils/__init__.py`.`
* `[✓] Remove `filter_sql_by_user` from the `__all__` list in `app/utils/__init__.py`.`
* `[✓] Step FIEIU-5: User ran `./start_app.sh`; `ImportError` resolved, but port 5000 is in use.`
**Task: SyncFrontendWithBackendLogic**
*Adjust frontend JavaScript to align with backend data availability and reporting frequency, primarily by tuning auto-refresh intervals and verifying state display logic.*
* `[✓] Step SFBL-1: Analyze `static/js/autoRefresh.js`. Determine the current refresh intervals used for fetching user states (via `userStates.js` and `/api/user_states`) and main report data (e.g., daily/weekly/monthly views).`
* `[✓] Step SFBL-2: Modify `static/js/autoRefresh.js` to set the primary refresh interval to 60 seconds (60000 milliseconds). This interval should apply to both user state updates and the main report data tables. This change aligns the frontend refresh rate with the 1-minute reporting interval of `report.ps1`, ensuring the dashboard displays near real-time information. Add comments to the code explaining this alignment.`
* `[✓] Step SFBL-3: Analyze `static/js/userStates.js`. Verify that it correctly interprets the status provided by the `/api/user_states` endpoint and updates the UI accordingly. Confirm there are no frontend-specific assumptions about state transitions or timings that might conflict with the more frequent and reliable data now coming from the client. (This is primarily a check; fixes will only be applied if clear discrepancies or outdated logic are found).`
* `[✓] Step SFBL-4: Analyze `static/js/tableManager.js`. Confirm that it directly displays aggregated data (like `duration_hours`, `first_login_time`) as provided by the backend API endpoints, without performing additional calculations or interpretations that could lead to discrepancies. (This is primarily a check; fixes will only be applied if clear discrepancies or outdated logic are found).`
**Task: EnsureLogoffReport**
*Modify `report.ps1` to reliably send a "stopped" status update when the script terminates, such as during user logoff.*
* `[✓] Step ELR-1: Analyze `report.ps1`. Locate the main monitoring loop (`while ($true)`) and its existing `try...catch` structure.`
* `[✓] Step ELR-2: Add a `finally` block to the `try...catch` structure that encompasses the main loop.`
* `[✓] Step ELR-3: Inside the new `finally` block:`
* `[✓] Add a log message indicating the script is exiting (e.g., "Script ending (e.g., user logoff, task stop, or after error). Attempting to report 'stopped' state.").`
* `[✓] Unconditionally call `Send-StateReport -State "stopped"`.`
* `[✓] Add a log message confirming the final report attempt (e.g., "Final 'stopped' state report attempt made.").`
* `[✓] Add a script end marker log message (e.g., "================ Script Ended (via Finally) ================.")`
* `[✓] Step ELR-4: Modify the existing `catch` block for the main loop:`
* `[✓] Update its log message to indicate that the 'stopped' state will be handled by the `finally` block (e.g., "Critical error in main loop: $_. Script will attempt to report 'stopped' state in finally block.").`
* `[✓] Remove the conditional `Send-StateReport -State "stopped"` call from the `catch` block, as this is now handled by the `finally` block.`
* `[✓] Step ELR-5: Remove the redundant `Write-Log "================ Script Ended Gracefully ================"` line from the end of the script, as the `finally` block now handles the definitive script end logging.`
**Task: AlignReportPs1ToMinuteLoggingAndSpec**
*Modify `report.ps1` to ensure it sends user status ("working" or "stopped") logs approximately every minute, align its default idle threshold with project specifications, and update `README.md` for configuration guidance.*
* `[✓] Step ALIGN-1: In report.ps1, confirm the default for $pollIntervalSeconds is 60. If not, set it to 60. Add/update a comment to clarify its purpose (e.g., "# How often to check user activity (in seconds).").`
* `[✓] Step ALIGN-2: In report.ps1, confirm the default for $reportIntervalMinutes is 1. If not, set it to 1. Add/update a comment to clarify its purpose and how it enables minute-by-minute "working" state reporting (e.g., "# How often to send a status update if state hasn't changed (in minutes). With a 1-minute poll interval, this ensures 'working' state is reported every minute. State changes are reported immediately.").`
* `[✓] Step ALIGN-3: In report.ps1, change the default value of $IdleThresholdMinutes from 15 to 5. This aligns with `project_config.md` ("Idle Detection Logic: Fixed inactivity threshold of 5 minutes") and `README.md` examples. Add/update a comment for this variable (e.g., "# User idle time in minutes before state changes to 'stopped'.").`
* `[✓] Step ALIGN-4: In README.md, update the client-side `config.env` example under "Installation Instructions" -> "Client-Side Setup" -> "Configure" to include `REPORT_INTERVAL_MINUTES="1"`. Ensure `IDLE_THRESHOLD_MINUTES` is shown as "5" and `POLL_INTERVAL_SECONDS` as "60".`
* The `config.env` block in `README.md` should be updated to:
```env
API_ENDPOINT="http://your-server-address:5000/api/report"
IDLE_THRESHOLD_MINUTES="5"
POLL_INTERVAL_SECONDS="60"
REPORT_INTERVAL_MINUTES="1"
```
**Task: AdvancedRefactoringAndDocumentation**
*Refactor exception handling, improve code clarity and naming, and create a comprehensive README.md.*
*Phase 1: Code Refactoring*
* **Sub-Task 1: Refine Exception Handling (REH)**
* `[✓] Step REH-1: Analyze app/api/events.py. Replace generic except Exception blocks with specific exception types (e.g., SQLAlchemyError, ValueError, TypeError). Ensure error responses are informative.`
* `[✓] Step REH-2: Analyze app/api/reports.py. Apply similar specific exception handling. Pay attention to potential errors during database queries or data processing.`
* `[✓] Step REH-3: Analyze app/utils/queries.py and app/utils/formatting.py. Ensure any potential errors are handled gracefully or documented.`
* `[✓] Step REH-4: Analyze run.py and app/__init__.py. Review exception handling during app initialization and configuration.`
* **Sub-Task 2: Enhance Code Clarity and Naming (ECN)** (Concurrent with Sub-Task 1)
* `[✓] Step ECN-1: Review app/api/events.py for clarity, naming, docstrings, and comments.`
* `[✓] Step ECN-2: Review app/api/reports.py for clarity, naming, docstrings, and comments.`
* `[✓] Step ECN-3: Review app/utils/queries.py for clarity, naming, docstrings, and comments.`
* `[✓] Step ECN-4: Review app/utils/formatting.py for clarity, naming, docstrings, and comments.`
* `[✓] Step ECN-5: Review app/views/dashboard.py for clarity, naming, docstrings, and comments.`
* `[✓] Step ECN-6: Review app/models.py for clarity, naming, docstrings, and comments.`
* `[✓] Step ECN-7: Review run.py and app/__init__.py for clarity, naming, docstrings, and comments.`
* `[✓] Step ECN-8: Briefly review static/js/ submodules for major clarity/naming issues.`
*Phase 2: README Creation*
* **Sub-Task 3: Create Comprehensive README.md (DOC)**
* `[✓] Step DOC-1: Delete the existing README.md file.`
* `[✓] Step DOC-2: Create a new README.md file.`
* `[✓] Step DOC-3: Draft "Project Title" and "Brief Project Description".`
* `[✓] Step DOC-4: Draft "Installation Instructions" (Client and Server).`
* `[✓] Step DOC-5: Draft "Usage Examples" (API interaction, dashboard access).`
* `[✓] Step DOC-6: Draft "Structure of the Project".`
* `[✓] Step DOC-7: Draft "Dependencies and Requirements".`
* `[✓] Step DOC-8: Draft "Contributing Guidelines".`
* `[✓] Step DOC-9: Draft "License Information".`
* `[✓] Step DOC-10: Review and refine the complete README.md.`
**Task: ModifyReportingDashboard**
*(Completed Task)*
*Modify the reporting and dashboard to show aggregated active working time duration in simple tables.*
* `[✓] Step Mod-1: Define new SQL queries in app.py to calculate daily, weekly, and monthly working durations per user using LEAD() and JULIANDAY(), aggregating with SUM().`
* `[✓] Step Mod-2: Update Flask endpoints in app.py:`
* `[✓] Step Mod-2.1: Modify /api/reports/daily to use the new daily duration query.`
* `[✓] Step Mod-2.2: Create /api/reports/weekly using a new weekly duration query.`
* `[✓] Step Mod-2.3: Create /api/reports/monthly using a new monthly duration query.`
* `[✓] Step Mod-2.4: Ensure endpoints return JSON data formatted for table display (e.g., list of dicts with user, period, duration_hours).`
* `[✓] Step Mod-3: Update templates/dashboard.html:`
* `[✓] Step Mod-3.1: Remove Chart.js script inclusion and chart-related HTML elements.`
* `[✓] Step Mod-3.2: Add JavaScript to fetch data from the new/updated API endpoints.`
* `[✓] Step Mod-3.3: Create HTML tables to display the fetched duration data (User, Period, Duration).`
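The Mod-1 query pattern (LEAD() to pair each event with the next one, JULIANDAY() differences summed into hours) can be demonstrated against an in-memory SQLite database; the `work_events` schema and sample rows here are illustrative assumptions, not the app's actual tables:

```python
# sqlite3 demo of the LEAD()/JULIANDAY() duration aggregation from Mod-1.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE work_events (
        username TEXT NOT NULL,
        state TEXT NOT NULL,        -- 'working' or 'stopped'
        timestamp TEXT NOT NULL     -- ISO-8601
    )
""")
conn.executemany(
    "INSERT INTO work_events VALUES (?, ?, ?)",
    [
        ("alice", "working", "2025-05-16 09:00:00"),
        ("alice", "stopped", "2025-05-16 10:30:00"),
        ("alice", "working", "2025-05-16 11:00:00"),
        ("alice", "stopped", "2025-05-16 11:30:00"),
    ],
)

# Each 'working' event's duration runs until the user's next event;
# JULIANDAY() differences are in days, so multiply by 24 for hours.
rows = conn.execute("""
    SELECT username,
           ROUND(SUM(
               (JULIANDAY(next_ts) - JULIANDAY(timestamp)) * 24
           ), 2) AS duration_hours
    FROM (
        SELECT username, state, timestamp,
               LEAD(timestamp) OVER (
                   PARTITION BY username ORDER BY timestamp
               ) AS next_ts
        FROM work_events
    ) AS e
    WHERE state = 'working' AND next_ts IS NOT NULL
    GROUP BY username
""").fetchall()
```

With the sample data, alice accrues 1.5 hours (09:00-10:30) plus 0.5 hours (11:00-11:30), so the query returns 2.0 hours for her. Note that window functions such as LEAD() require SQLite 3.25 or newer.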
**Task: AddExtensiveLogging**
*(Completed Task)*
*Add extensive file-based logging to both the Flask server and the PowerShell client.*
* `[✓] Step Log-1: Configure Flask Logging (`app.py`):`
* `[✓] Step Log-1.1: Import `logging` and `logging.handlers`.`
* `[✓] Step Log-1.2: In `create_app` or equivalent setup location: Configure a `RotatingFileHandler` to write to `