35 Commits

Author SHA1 Message Date
f7c45b8015 [Feature] Add legacy api endpoint to mimic ReferringScenarios (#362)
Co-authored-by: Tim Lorsbach <tim@lorsba.ch>
Reviewed-on: enviPath/enviPy#362
2026-03-17 19:44:47 +13:00
68aea97013 [Feature] Simple template extension mechanism (#361)
Co-authored-by: Tim Lorsbach <tim@lorsba.ch>
Reviewed-on: enviPath/enviPy#361
2026-03-16 21:06:20 +13:00
3cc7fa9e8b [Fix] Add Captcha vars to Template (#359)
Co-authored-by: Tim Lorsbach <tim@lorsba.ch>
Reviewed-on: enviPath/enviPy#359
2026-03-13 11:46:34 +13:00
21f3390a43 [Feature] Add Captchas to avoid spam registrations (#358)
Co-authored-by: Tim Lorsbach <tim@lorsba.ch>
Reviewed-on: enviPath/enviPy#358
2026-03-13 11:36:48 +13:00
8cdf91c8fb [Fix] Broken Model Creation (#356)
Co-authored-by: Tim Lorsbach <tim@lorsba.ch>
Reviewed-on: enviPath/enviPy#356
2026-03-12 11:34:14 +13:00
bafbf11322 [Fix] Broken Enzyme Links (#353)
Co-authored-by: Tim Lorsbach <tim@lorsba.ch>
Reviewed-on: enviPath/enviPy#353
2026-03-12 10:25:47 +13:00
f1a9456d1d [Fix] enviFormer prediction (#352)
Co-authored-by: Tim Lorsbach <tim@lorsba.ch>
Reviewed-on: enviPath/enviPy#352
2026-03-12 08:49:44 +13:00
e0764126e3 [Fix] Scenario Review Status + Depth issues (#351)
https://envipath.org/api/legacy/package/32de3cf4-e3e6-4168-956e-32fa5ddb0ce1/pathway/1d537657-298c-496b-9e6f-2bec0cbe0678

-> Node.depth can be float for Dummynodes
-> Scenarios in Edge.d3_json were lacking a reviewed flag

Co-authored-by: Tim Lorsbach <tim@lorsba.ch>
Reviewed-on: enviPath/enviPy#351
2026-03-12 08:28:20 +13:00
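The depth fix above boils down to a small serialization guard; a hedged sketch (the helper name is hypothetical and not taken from the enviPy code):

```python
def normalize_depth(depth):
    # Hypothetical helper illustrating the fix described above, not the actual
    # enviPy implementation: dummy nodes can carry whole-number float depths,
    # which should be coerced back to int before serialization.
    if isinstance(depth, float) and depth.is_integer():
        return int(depth)
    return depth

print(normalize_depth(3.0))  # 3
print(normalize_depth(2.5))  # 2.5
```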
ef0c45b203 [Fix] Pepper display probability calculation (#349)
The displayed probability of "persistent" is now calculated to include the "very persistent" class.

Reviewed-on: enviPath/enviPy#349
Co-authored-by: Liam Brydon <lbry121@aucklanduni.ac.nz>
Co-committed-by: Liam Brydon <lbry121@aucklanduni.ac.nz>
2026-03-11 19:12:55 +13:00
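The fix above amounts to folding the "very persistent" class into the displayed persistence probability; a minimal sketch with assumed class names (not taken from the PEPPER code):

```python
def displayed_persistence(probs):
    # Fold "very persistent" into the displayed P(persistent), as the fix
    # describes. The class keys here are assumptions for illustration.
    return probs["persistent"] + probs["very_persistent"]

probs = {"not_persistent": 0.5, "persistent": 0.25, "very_persistent": 0.25}
print(displayed_persistence(probs))  # 0.5
```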
b737fc93eb [Feature] Search for Permissions, Prep Compound / Structure to be extended, Prep Template overwrites (#347)
Co-authored-by: Tim Lorsbach <tim@lorsba.ch>
Reviewed-on: enviPath/enviPy#347
2026-03-11 11:27:15 +13:00
d4295c9349 [Fix] bootstrap command now reflects new Scenario/AdditionalInformation structure (#346)
Co-authored-by: Tim Lorsbach <tim@lorsba.ch>
Reviewed-on: enviPath/enviPy#346
2026-03-07 03:14:28 +13:00
c6ff97694d [Feature] PEPPER in enviPath (#332)
Co-authored-by: Tim Lorsbach <tim@lorsba.ch>
Reviewed-on: enviPath/enviPy#332
2026-03-06 22:11:22 +13:00
6e00926371 [Feature] Scenario and Additional Information creation via enviPath-python, Add Half-Lives to API Output, Fix source/target ids in legacy API (#340)
Co-authored-by: Tim Lorsbach <tim@lorsba.ch>
Reviewed-on: enviPath/enviPy#340
2026-03-06 07:20:18 +13:00
81cc612e69 [Feature] Populate Batch Predict Table by CSV (#339)
Co-authored-by: Tim Lorsbach <tim@lorsba.ch>
Reviewed-on: enviPath/enviPy#339
2026-03-06 03:15:44 +13:00
cc9598775c [Fix] Fix Perm for creating entities (#341)
Co-authored-by: Tim Lorsbach <tim@lorsba.ch>
Reviewed-on: enviPath/enviPy#341
2026-02-27 03:56:33 +13:00
d2c2e643cb [Fix] Compound Grouping, Identity prediction of enviFormer, Setting params (#337)
Co-authored-by: Tim Lorsbach <tim@lorsba.ch>
Reviewed-on: enviPath/enviPy#337
2026-02-20 10:14:28 +13:00
0ff046363c [Fix] Fixed failing frontend tests due to renaming (#335)
Co-authored-by: Tim Lorsbach <tim@lorsba.ch>
Reviewed-on: enviPath/enviPy#335
2026-02-17 03:09:32 +13:00
5150027f0d [Fix] Login via email, prevent Usernames with certain chars 2026-02-16 13:58:06 +01:00
58ab5b33e3 [Fix] Filter Active Users (#314) (#329)
Adding users to a group or setting permissions on a package now filters for active users. Also, any inactive members of a group/package are marked as such.

<img width="490" alt="{3B906C71-F3AE-41E4-A61C-B8377D79F685}.png" src="attachments/09cf149a-9d7a-4560-8ce7-9f3487527ee2">

Reviewed-on: enviPath/enviPy#329
Co-authored-by: Liam Brydon <lbry121@aucklanduni.ac.nz>
Co-committed-by: Liam Brydon <lbry121@aucklanduni.ac.nz>
2026-02-12 20:20:16 +13:00
73f0202267 [Fix] Filter Scenarios with Parent (#311) (#323)
The scenario lists, both in /scenarios and /package/<id>/scenario, no longer show related scenarios (children).
All related scenarios are shown on the scenario page under Related Scenarios, if there are any.
<img width="500" alt="{C2D38DED-A402-4A27-A241-BC2302C62A50}.png" src="attachments/1371c177-220c-42d5-94ff-56f9fbab761f">

Co-authored-by: Tim Lorsbach <tim@lorsba.ch>
Reviewed-on: enviPath/enviPy#323
Co-authored-by: Liam Brydon <lbry121@aucklanduni.ac.nz>
Co-committed-by: Liam Brydon <lbry121@aucklanduni.ac.nz>
2026-02-11 23:19:20 +13:00
27c5bad9c5 [Fix] Upgraded ai-lib, temporarily ignore additional validation errors (#328)
Co-authored-by: Tim Lorsbach <tim@lorsba.ch>
Reviewed-on: enviPath/enviPy#328
2026-02-11 03:49:20 +13:00
5789f20e7f [Feature] Create API Key Authentication for v1 API (#327)
Add API key authentication to v1 API
Also includes:
- management command to create keys for users
- Improvements to API tests

Minor:
- more robust way to start docker dev container.

Reviewed-on: enviPath/enviPy#327
Co-authored-by: Tobias O <tobias.olenyi@envipath.com>
Co-committed-by: Tobias O <tobias.olenyi@envipath.com>
2026-02-11 02:29:54 +13:00
c0cfdb9255 [Style] Adds custom name display for timeseries (#320)
<img width="1336" alt="image.png" src="attachments/58e49257-976e-469f-a19e-069c8915c437">

Co-authored-by: jebus <lorsbach@envipath.com>
Reviewed-on: enviPath/enviPy#320
Co-authored-by: Tobias O <tobias.olenyi@envipath.com>
Co-committed-by: Tobias O <tobias.olenyi@envipath.com>
2026-02-04 08:15:16 +13:00
5da8dbc191 [Feature] Timeseries Pathway view (#319)
**Warning: depends on the Timeseries feature being merged**

Implements a way to display OECD 301F data on the pathway view.
This is mostly a PoC and needs to be improved once the pathway rendering is updated.

![image.png](/attachments/053965d7-78f7-487a-b5d0-898612708fa3)

Co-authored-by: jebus <lorsbach@envipath.com>
Co-authored-by: Tim Lorsbach <tim@lorsba.ch>
Reviewed-on: enviPath/enviPy#319
Co-authored-by: Tobias O <tobias.olenyi@envipath.com>
Co-committed-by: Tobias O <tobias.olenyi@envipath.com>
2026-02-04 05:19:25 +13:00
dc18b73e08 [Feature] Adds timeseries display (#313)
Adds a way to input/display timeseries data to the additional information

Reviewed-on: enviPath/enviPy#313
Reviewed-by: jebus <lorsbach@envipath.com>
Co-authored-by: Tobias O <tobias.olenyi@envipath.com>
Co-committed-by: Tobias O <tobias.olenyi@envipath.com>
2026-02-04 01:01:06 +13:00
d80dfb5ee3 [Feature] Dynamic additional information rendering in frontend (#282)
This implements a version of #274, relying on Pydantic's built-in JSON schema and JSON rendering.
Requires additional UI tagging in the ai model repo but will remove HTML tags.

Example scenario with filled information: 5882df9c-dae1-4d80-a40e-db4724271456/scenario/3a4d395a-6a6d-4154-8ce3-ced667fceec0

Reviewed-on: enviPath/enviPy#282
Co-authored-by: Tobias O <tobias.olenyi@envipath.com>
Co-committed-by: Tobias O <tobias.olenyi@envipath.com>
2026-01-31 00:44:03 +13:00
9f63a9d4de [Fix] Fixed ObjectDoesNotExist for send_registration_mail, fixed duplicate mail sending (#312)
Co-authored-by: Tim Lorsbach <tim@lorsba.ch>
Reviewed-on: enviPath/enviPy#312
2026-01-29 20:24:11 +13:00
5565b9cb9e [Fix] UI bugs, Registrations Mail, BTRules Popup, Legacy API fixes (#309)
Co-authored-by: Tim Lorsbach <tim@lorsba.ch>
Reviewed-on: enviPath/enviPy#309
2026-01-29 11:13:34 +13:00
ab0b5a5186 [Feature] Leftovers after Release (#303)
Co-authored-by: Tim Lorsbach <tim@lorsba.ch>
Reviewed-on: enviPath/enviPy#303
2026-01-22 10:26:38 +13:00
f905bf21cf [Feature] Update ToS to be more legally safe and sensible (#301)
- Improved ToS content
- Add ToS pointer and academic use note at signup
- Remove legal collection page (unnecessary)

Reviewed-on: enviPath/enviPy#301
Co-authored-by: Tobias O <tobias.olenyi@envipath.com>
Co-committed-by: Tobias O <tobias.olenyi@envipath.com>
2026-01-20 03:18:40 +13:00
1fd993927c [Feature] Check os.environ for ENV_PATH (#300)
Co-authored-by: Tim Lorsbach <tim@lorsba.ch>
Reviewed-on: enviPath/enviPy#300
2026-01-19 23:41:43 +13:00
2a2fe4f147 Static pages update (#299)
I have updated the static pages following @wicker's comments on #275

Co-authored-by: Tobias O <tobias.olenyi@envipath.com>
Reviewed-on: enviPath/enviPy#299
Co-authored-by: Liam Brydon <lbry121@aucklanduni.ac.nz>
Co-committed-by: Liam Brydon <lbry121@aucklanduni.ac.nz>
2026-01-19 23:37:08 +13:00
5f5ae76182 [Fix] Fix Prediction Spinner, ensure proper pathway status is set
Fixes the spinner and status message display on the pathway page

Reviewed-on: enviPath/enviPy#291
Co-authored-by: Tobias O <tobias.olenyi@envipath.com>
Co-committed-by: Tobias O <tobias.olenyi@envipath.com>
2026-01-15 23:09:12 +13:00
1c2f70b3b9 [Feature] Add batch-predict to site-map (#285)
Adds batch predict to the site-map but does not give it prominence.
This is to avoid non-experts "accidentally" flooding the system.

Happy to move it to the main menu if better, @jebus?

Reviewed-on: enviPath/enviPy#285
Co-authored-by: Tobias O <tobias.olenyi@envipath.com>
Co-committed-by: Tobias O <tobias.olenyi@envipath.com>
2026-01-15 22:30:31 +13:00
54f8302104 [Fix] Fixed Search Output, Legacy API Model Endpoint, Handle ObjectDoesNotExist in views (#297)
Co-authored-by: Tim Lorsbach <tim@lorsba.ch>
Reviewed-on: enviPath/enviPy#297
2026-01-15 20:39:54 +13:00
141 changed files with 12971 additions and 2089 deletions


@@ -48,11 +48,6 @@ runs:
         shell: bash
         run: |
           uv run python scripts/pnpm_wrapper.py install
-          cat << 'EOF' > pnpm-workspace.yaml
-          onlyBuiltDependencies:
-            - '@parcel/watcher'
-            - '@tailwindcss/oxide'
-          EOF
           uv run python scripts/pnpm_wrapper.py run build
     - name: Wait for Postgres


@@ -8,7 +8,7 @@ repos:
       - id: end-of-file-fixer
       - id: check-yaml
       - id: check-added-large-files
-        exclude: ^static/images/
+        exclude: ^static/images/|fixtures/
   - repo: https://github.com/astral-sh/ruff-pre-commit
     rev: v0.13.3


@@ -8,13 +8,12 @@ These instructions will guide you through setting up the project for local development.
 - Python 3.11 or later
 - [uv](https://github.com/astral-sh/uv) - Python package manager
-- **Docker and Docker Compose** - Required for running PostgreSQL database
+- **Docker and Docker Compose** - Required for running PostgreSQL database and Redis (for async Celery tasks)
 - Git
 - Make

 > **Note:** This application requires PostgreSQL (uses `ArrayField`). Docker is the easiest way to run PostgreSQL locally.

 ### 1. Install Dependencies

 This project uses `uv` to manage dependencies and `poe-the-poet` for task running. First, [install `uv` if you don't have it yet](https://docs.astral.sh/uv/guides/install-python/).
@@ -79,25 +78,48 @@
 uv run poe shell      # Open the Django shell
 uv run poe build      # Build frontend assets and collect static files
 uv run poe clean      # Remove database volumes (WARNING: destroys all data)
+uv run poe celery     # Start Celery worker for async task processing
+uv run poe celery-dev # Start database and Celery worker
 ```

+### 4. Async Celery Setup (Optional)
+
+By default, Celery tasks run synchronously (`CELERY_TASK_ALWAYS_EAGER = True`), which means prediction tasks block the HTTP request until completion. To enable asynchronous task processing with live status updates on pathway pages:
+
+1. **Set the Celery flag in your `.env` file:**
+   ```bash
+   FLAG_CELERY_PRESENT=True
+   ```
+2. **Start Redis and Celery worker:**
+   ```bash
+   uv run poe celery-dev
+   ```
+3. **Start the development server** (in another terminal):
+   ```bash
+   uv run poe dev
+   ```
+
 ### Troubleshooting

-* **Docker Connection Error:** If you see an error like `open //./pipe/dockerDesktopLinuxEngine: The system cannot find the file specified` (on Windows), it likely means your Docker Desktop application is not running. Please start Docker Desktop and try the command again.
+- **Docker Connection Error:** If you see an error like `open //./pipe/dockerDesktopLinuxEngine: The system cannot find the file specified` (on Windows), it likely means your Docker Desktop application is not running. Please start Docker Desktop and try the command again.
-* **SSH Keys for Git Dependencies:** Some dependencies are installed from private git repositories and require SSH authentication. Ensure your SSH keys are configured correctly for Git.
+- **SSH Keys for Git Dependencies:** Some dependencies are installed from private git repositories and require SSH authentication. Ensure your SSH keys are configured correctly for Git.
-  * For a general guide, see [GitHub's official documentation](https://docs.github.com/en/authentication/connecting-to-github-with-ssh/generating-a-new-ssh-key-and-adding-it-to-the-ssh-agent).
+  - For a general guide, see [GitHub's official documentation](https://docs.github.com/en/authentication/connecting-to-github-with-ssh/generating-a-new-ssh-key-and-adding-it-to-the-ssh-agent).
-* **Windows Users:** If `uv sync` hangs while fetching git dependencies, you may need to explicitly configure Git to use the Windows OpenSSH client and use the `ssh-agent` to manage your key's passphrase.
+- **Windows Users:** If `uv sync` hangs while fetching git dependencies, you may need to explicitly configure Git to use the Windows OpenSSH client and use the `ssh-agent` to manage your key's passphrase.
   1. **Point Git to the correct SSH executable:**
      ```powershell
      git config --global core.sshCommand "C:/Windows/System32/OpenSSH/ssh.exe"
      ```
   2. **Enable and use the SSH agent:**
      ```powershell
      # Run these commands in an administrator PowerShell
      Get-Service ssh-agent | Set-Service -StartupType Automatic -PassThru | Start-Service
      # Add your key to the agent. It will prompt for the passphrase once.
      ssh-add
      ```
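The async setup described above hinges on a single settings toggle; a hedged sketch of how a `FLAG_CELERY_PRESENT` variable might map to Celery's eager mode (the exact wiring in enviPath's settings is not shown in this diff, so treat the function below as an assumption):

```python
import os

def celery_eager(environ=os.environ):
    # True -> tasks run inline in the HTTP request (CELERY_TASK_ALWAYS_EAGER);
    # False -> tasks are dispatched to a Redis-backed worker.
    # Sketch of the FLAG_CELERY_PRESENT convention, not the real settings.py.
    return environ.get("FLAG_CELERY_PRESENT", "False") != "True"

# In settings.py one would then have something like:
# CELERY_TASK_ALWAYS_EAGER = celery_eager()
```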

bridge/__init__.py (new file, empty)

bridge/contracts.py (new file, 233 lines)

@@ -0,0 +1,233 @@
import enum
from abc import ABC, abstractmethod

from .dto import BuildResult, EnviPyDTO, EvaluationResult, RunResult


class PropertyType(enum.Enum):
    """
    Enumeration representing different types of properties.

    PropertyType is an Enum class that defines categories or types of properties
    based on their weight or nature. It can typically be used when classifying
    objects or entities by their weight classification, such as lightweight or heavy.
    """

    LIGHTWEIGHT = "lightweight"
    HEAVY = "heavy"


class Plugin(ABC):
    """
    Defines an abstract base class Plugin to serve as a blueprint for plugins.

    This class establishes the structure that all plugin implementations must
    follow. It enforces the presence of required methods to ensure consistent
    functionality across all derived classes.
    """

    @abstractmethod
    def identifier(self) -> str:
        pass

    @abstractmethod
    def name(self) -> str:
        """
        Represents an abstract method that provides a contract for implementing a method
        to return a name as a string. Must be implemented in subclasses.

        Name must be unique across all plugins.

        Methods
        -------
        name() -> str
            Abstract method to be defined in subclasses, which returns a string
            representing a name.
        """
        pass

    @abstractmethod
    def display(self) -> str:
        """
        An abstract method that must be implemented by subclasses to display
        specific information or behavior. The method ensures that all subclasses
        provide their own implementation of the display functionality.

        Raises:
            NotImplementedError: Raises this error when the method is not implemented
                in a subclass.

        Returns:
            str: A string used in dropdown menus or other user interfaces to display
        """
        pass


class Property(Plugin):
    @abstractmethod
    def requires_rule_packages(self) -> bool:
        """
        Defines an abstract method to determine whether rule packages are required.

        This method should be implemented by subclasses to specify if they depend
        on rule packages for their functioning.

        Raises:
            NotImplementedError: If the subclass has not implemented this method.

        @return: A boolean indicating if rule packages are required.
        """
        pass

    @abstractmethod
    def requires_data_packages(self) -> bool:
        """
        Defines an abstract method to determine whether data packages are required.

        This method should be implemented by subclasses to specify if they depend
        on data packages for their functioning.

        Raises:
            NotImplementedError: If the subclass has not implemented this method.

        Returns:
            bool: True if the service requires data packages, False otherwise.
        """
        pass

    @abstractmethod
    def get_type(self) -> PropertyType:
        """
        An abstract method that provides the type of property. This method must
        be implemented by subclasses to specify the appropriate property type.

        Raises:
            NotImplementedError: If the method is not implemented by a subclass.

        Returns:
            PropertyType: The type of the property associated with the implementation.
        """
        pass

    def is_heavy(self):
        """
        Determines if the current property type is heavy.

        This method evaluates whether the property type returned from the `get_type()`
        method is classified as `HEAVY`. It utilizes the `PropertyType.HEAVY` constant
        for this comparison.

        Raises:
            AttributeError: If the `get_type()` method is not defined or does not return
                a valid value.

        Returns:
            bool: True if the property type is `HEAVY`, otherwise False.
        """
        return self.get_type() == PropertyType.HEAVY

    @abstractmethod
    def build(self, eP: EnviPyDTO, *args, **kwargs) -> BuildResult | None:
        """
        Abstract method to prepare and construct a specific build process based on the provided
        environment data transfer object (EnviPyDTO). This method should be implemented by
        subclasses to handle the particular requirements of the environment.

        Parameters:
            eP : EnviPyDTO
                The data transfer object containing environment details for the build process.
            *args :
                Additional positional arguments required for the build.
            **kwargs :
                Additional keyword arguments to offer flexibility and customization for
                the build process.

        Returns:
            BuildResult | None
                Returns a BuildResult instance if the build operation succeeds, else returns None.

        Raises:
            NotImplementedError
                If the method is not implemented in a subclass.
        """
        pass

    @abstractmethod
    def run(self, eP: EnviPyDTO, *args, **kwargs) -> RunResult:
        """
        Represents an abstract base class for executing a generic process with
        provided parameters and returning a standardized result.

        Attributes:
            None.

        Methods:
            run(eP, *args, **kwargs):
                Executes a task with specified input parameters and optional
                arguments, returning the outcome in the form of a RunResult object.
                This is an abstract method and must be implemented in subclasses.

        Raises:
            NotImplementedError: If the subclass does not implement the abstract
                method.

        Parameters:
            eP (EnviPyDTO): The primary object containing information or data required
                for processing. Mandatory.
            *args: Variable length argument list for additional positional arguments.
            **kwargs: Arbitrary keyword arguments for additional options or settings.

        Returns:
            RunResult: A result object encapsulating the status, output, or details
                of the process execution.
        """
        pass

    @abstractmethod
    def evaluate(self, eP: EnviPyDTO, *args, **kwargs) -> EvaluationResult:
        """
        Abstract method for evaluating data based on the given input and additional arguments.

        This method is intended to be implemented by subclasses and provides
        a mechanism to perform an evaluation procedure based on input encapsulated
        in an EnviPyDTO object.

        Parameters:
            eP : EnviPyDTO
                The data transfer object containing necessary input for evaluation.
            *args : tuple
                Additional positional arguments for the evaluation process.
            **kwargs : dict
                Additional keyword arguments for the evaluation process.

        Returns:
            EvaluationResult
                The result of the evaluation performed by the method.

        Raises:
            NotImplementedError
                If the method is not implemented in the subclass.
        """
        pass

    @abstractmethod
    def build_and_evaluate(self, eP: EnviPyDTO, *args, **kwargs) -> EvaluationResult:
        """
        An abstract method designed to build and evaluate a model or system using the provided
        environmental parameters and additional optional arguments.

        Args:
            eP (EnviPyDTO): The environmental parameters required for building and evaluating.
            *args: Additional positional arguments.
            **kwargs: Additional keyword arguments.

        Returns:
            EvaluationResult: The result of the evaluation process.

        Raises:
            NotImplementedError: If the method is not implemented by a subclass.
        """
        pass
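To make the contract concrete, here is a minimal, self-contained sketch of a plugin satisfying the Property ABC. The enum and a trimmed base class are re-declared locally (the real ones live in bridge/contracts.py), and the plugin classes are hypothetical, not part of enviPy:

```python
import enum
from abc import ABC, abstractmethod


class PropertyType(enum.Enum):  # local stand-in for bridge.contracts.PropertyType
    LIGHTWEIGHT = "lightweight"
    HEAVY = "heavy"


class Property(ABC):  # trimmed stand-in for the bridge.contracts.Property ABC
    @abstractmethod
    def get_type(self) -> PropertyType: ...

    def is_heavy(self) -> bool:
        # Mirrors Property.is_heavy() from contracts.py above.
        return self.get_type() == PropertyType.HEAVY


class LogPPredictor(Property):  # hypothetical lightweight plugin
    def get_type(self) -> PropertyType:
        return PropertyType.LIGHTWEIGHT


class RetrainedModel(Property):  # hypothetical heavy plugin
    def get_type(self) -> PropertyType:
        return PropertyType.HEAVY


print(LogPPredictor().is_heavy(), RetrainedModel().is_heavy())  # False True
```

The `is_heavy()` helper is the only concrete behavior on the ABC, so subclasses get the lightweight/heavy classification for free by implementing `get_type()`.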

bridge/dto.py (new file, 140 lines)

@@ -0,0 +1,140 @@
from dataclasses import dataclass
from typing import Any, List, Optional, Protocol

from envipy_additional_information import EnviPyModel, register
from pydantic import HttpUrl

from utilities.chem import FormatConverter, ProductSet


@dataclass(frozen=True, slots=True)
class Context:
    uuid: str
    url: str
    work_dir: str


class CompoundProto(Protocol):
    url: str | None
    name: str | None
    smiles: str


class RuleProto(Protocol):
    url: str
    name: str

    def apply(self, smiles, *args, **kwargs): ...


class ReactionProto(Protocol):
    url: str
    name: str
    rules: List[RuleProto]


class EnviPyDTO(Protocol):
    def get_context(self) -> Context: ...

    def get_compounds(self) -> List[CompoundProto]: ...

    def get_reactions(self) -> List[ReactionProto]: ...

    def get_rules(self) -> List[RuleProto]: ...

    @staticmethod
    def standardize(smiles, remove_stereo=False, canonicalize_tautomers=False): ...

    @staticmethod
    def apply(
        smiles: str,
        smirks: str,
        preprocess_smiles: bool = True,
        bracketize: bool = True,
        standardize: bool = True,
        kekulize: bool = True,
        remove_stereo: bool = True,
        reactant_filter_smarts: str | None = None,
        product_filter_smarts: str | None = None,
    ) -> List["ProductSet"]: ...


class PredictedProperty(EnviPyModel):
    pass


@register("buildresult")
class BuildResult(EnviPyModel):
    data: dict[str, Any] | List[dict[str, Any]] | None


@register("runresult")
class RunResult(EnviPyModel):
    producer: HttpUrl
    description: Optional[str] = None
    result: PredictedProperty | List[PredictedProperty]


@register("evaluationresult")
class EvaluationResult(EnviPyModel):
    data: dict[str, Any] | List[dict[str, Any]] | None


class BaseDTO(EnviPyDTO):
    def __init__(
        self,
        uuid: str,
        url: str,
        work_dir: str,
        compounds: List[CompoundProto],
        reactions: List[ReactionProto],
        rules: List[RuleProto],
    ):
        self.uuid = uuid
        self.url = url
        self.work_dir = work_dir
        self.compounds = compounds
        self.reactions = reactions
        self.rules = rules

    def get_context(self) -> Context:
        return Context(uuid=self.uuid, url=self.url, work_dir=self.work_dir)

    def get_compounds(self) -> List[CompoundProto]:
        return self.compounds

    def get_reactions(self) -> List[ReactionProto]:
        return self.reactions

    def get_rules(self) -> List[RuleProto]:
        return self.rules

    @staticmethod
    def standardize(smiles, remove_stereo=False, canonicalize_tautomers=False):
        return FormatConverter.standardize(
            smiles, remove_stereo=remove_stereo, canonicalize_tautomers=canonicalize_tautomers
        )

    @staticmethod
    def apply(
        smiles: str,
        smirks: str,
        preprocess_smiles: bool = True,
        bracketize: bool = True,
        standardize: bool = True,
        kekulize: bool = True,
        remove_stereo: bool = True,
        reactant_filter_smarts: str | None = None,
        product_filter_smarts: str | None = None,
    ) -> List["ProductSet"]:
        return FormatConverter.apply(
            smiles,
            smirks,
            preprocess_smiles,
            bracketize,
            standardize,
            kekulize,
            remove_stereo,
            reactant_filter_smarts,
            product_filter_smarts,
        )


@@ -1,6 +1,6 @@
 services:
   db:
-    image: postgres:15
+    image: postgres:18
     container_name: envipath-postgres
     environment:
       POSTGRES_USER: postgres
@@ -9,12 +9,18 @@ services:
     ports:
       - "5432:5432"
     volumes:
-      - postgres_data:/var/lib/postgresql/data
+      - postgres_data:/var/lib/postgresql
     healthcheck:
       test: ["CMD-SHELL", "pg_isready -U postgres"]
       interval: 5s
       timeout: 5s
       retries: 5
+  redis:
+    image: redis:7-alpine
+    container_name: envipath-redis
+    ports:
+      - "6379:6379"

 volumes:
   postgres_data:


@@ -14,14 +14,15 @@
 import os
 from pathlib import Path

 from dotenv import load_dotenv
-from envipy_plugins import Classifier, Property, Descriptor
 from sklearn.ensemble import RandomForestClassifier
 from sklearn.tree import DecisionTreeClassifier

 # Build paths inside the project like this: BASE_DIR / 'subdir'.
 BASE_DIR = Path(__file__).resolve().parent.parent

-load_dotenv(BASE_DIR / ".env", override=False)
+ENV_PATH = os.environ.get("ENV_PATH", BASE_DIR / ".env")
+print(f"Loading env from {ENV_PATH}")
+load_dotenv(ENV_PATH, override=False)

 # Quick-start development settings - unsuitable for production
 # See https://docs.djangoproject.com/en/4.2/howto/deployment/checklist/
@@ -50,7 +51,7 @@ INSTALLED_APPS = [
     # Custom
     "epapi",  # API endpoints (v1, etc.)
     "epdb",
-    # "migration",
+    "migration",
 ]

 TENANT = os.environ.get("TENANT", "public")
@@ -91,10 +92,19 @@
 ROOT_URLCONF = "envipath.urls"

+TEMPLATE_DIRS = [
+    os.path.join(BASE_DIR, "templates"),
+]
+
+# If we have a non-public tenant, we might need to overwrite some templates
+# search TENANT folder first...
+if TENANT != "public":
+    TEMPLATE_DIRS.insert(0, os.path.join(BASE_DIR, TENANT, "templates"))
+
 TEMPLATES = [
     {
         "BACKEND": "django.template.backends.django.DjangoTemplates",
-        "DIRS": (os.path.join(BASE_DIR, "templates"),),
+        "DIRS": TEMPLATE_DIRS,
         "APP_DIRS": True,
         "OPTIONS": {
             "context_processors": [
@@ -126,6 +136,13 @@
     }
 }

+if os.environ.get("USE_TEMPLATE_DB", False) == "True":
+    DATABASES["default"]["TEST"] = {
+        "NAME": f"test_{os.environ['TEMPLATE_DB']}",
+        "TEMPLATE": os.environ["TEMPLATE_DB"],
+    }
+
 # Password validation
 # https://docs.djangoproject.com/en/4.2/ref/settings/#auth-password-validators
@@ -309,22 +326,19 @@ DEFAULT_MODEL_PARAMS = {
     "num_chains": 10,
 }

-DEFAULT_MAX_NUMBER_OF_NODES = 30
-DEFAULT_MAX_DEPTH = 5
+DEFAULT_MAX_NUMBER_OF_NODES = 50
+DEFAULT_MAX_DEPTH = 8
 DEFAULT_MODEL_THRESHOLD = 0.25

 # Loading Plugins
 PLUGINS_ENABLED = os.environ.get("PLUGINS_ENABLED", "False") == "True"
-if PLUGINS_ENABLED:
-    from utilities.plugin import discover_plugins
-
-    CLASSIFIER_PLUGINS = discover_plugins(_cls=Classifier)
-    PROPERTY_PLUGINS = discover_plugins(_cls=Property)
-    DESCRIPTOR_PLUGINS = discover_plugins(_cls=Descriptor)
-else:
-    CLASSIFIER_PLUGINS = {}
-    PROPERTY_PLUGINS = {}
-    DESCRIPTOR_PLUGINS = {}
+BASE_PLUGINS = [
+    "pepper.PEPPER",
+]
+
+CLASSIFIER_PLUGINS = {}
+PROPERTY_PLUGINS = {}
+DESCRIPTOR_PLUGINS = {}

 SENTRY_ENABLED = os.environ.get("SENTRY_ENABLED", "False") == "True"
 if SENTRY_ENABLED:
@@ -394,3 +408,9 @@ if MS_ENTRA_ENABLED:
 # Site ID 10 -> beta.envipath.org
 MATOMO_SITE_ID = os.environ.get("MATOMO_SITE_ID", "10")
+
+# CAP
+CAP_ENABLED = os.environ.get("CAP_ENABLED", "False") == "True"
+CAP_API_BASE = os.environ.get("CAP_API_BASE", None)
+CAP_SITE_KEY = os.environ.get("CAP_SITE_KEY", None)
+CAP_SECRET_KEY = os.environ.get("CAP_SECRET_KEY", None)
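The TEMPLATE_DIRS change in the settings diff implements tenant template overrides purely via search order; a self-contained sketch of that precedence logic (the paths are stand-ins):

```python
BASE_DIR = "/app"  # stand-in for the project's BASE_DIR

def template_dirs(tenant: str) -> list:
    # The tenant-specific template directory (if any) is searched before the
    # shared one, so a tenant can shadow individual templates without forking
    # the whole template tree.
    dirs = [f"{BASE_DIR}/templates"]
    if tenant != "public":
        dirs.insert(0, f"{BASE_DIR}/{tenant}/templates")
    return dirs

print(template_dirs("acme"))  # ['/app/acme/templates', '/app/templates']
```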


@@ -0,0 +1 @@
+"""Tests for epapi utility modules."""


@@ -0,0 +1,218 @@
"""
Tests for validation error utilities.

Tests the format_validation_error() and handle_validation_error() functions
that transform Pydantic validation errors into user-friendly messages.
"""

import json
from typing import Literal

from django.test import TestCase, tag
from ninja.errors import HttpError
from pydantic import BaseModel, ValidationError, field_validator

from epapi.utils.validation_errors import format_validation_error, handle_validation_error


@tag("api", "utils")
class ValidationErrorUtilityTests(TestCase):
    """Test validation error utility functions."""

    def test_format_missing_field_error(self):
        """Test formatting of missing required field error."""

        # Create a model with a required field
        class TestModel(BaseModel):
            required_field: str

        # Trigger validation error
        try:
            TestModel()
        except ValidationError as e:
            errors = e.errors()
            self.assertEqual(len(errors), 1)
            formatted = format_validation_error(errors[0])
            self.assertEqual(formatted, "This field is required")

    def test_format_enum_error(self):
        """Test formatting of enum validation error."""

        class TestModel(BaseModel):
            status: Literal["active", "inactive"]

        try:
            TestModel(status="invalid")
        except ValidationError as e:
            errors = e.errors()
            self.assertEqual(len(errors), 1)
            formatted = format_validation_error(errors[0])
            # Literal errors get formatted as "Please enter ..." with the valid options
            self.assertIn("Please enter", formatted)
            self.assertIn("active", formatted)
            self.assertIn("inactive", formatted)

    def test_format_type_errors(self):
        """Test formatting of type validation errors (string, int, float)."""
        test_cases = [
            # (field_type, invalid_value, expected_message)
            # Note: We don't check exact error_type as Pydantic may use different types
            # (e.g., int_type vs int_parsing) but we verify the formatted message is correct
            (str, 123, "Please enter a valid string"),
            (int, "not_a_number", "Please enter a valid int"),
            (float, "not_a_float", "Please enter a valid float"),
        ]
        for field_type, invalid_value, expected_message in test_cases:
            with self.subTest(field_type=field_type.__name__):

                class TestModel(BaseModel):
                    field: field_type

                try:
                    TestModel(field=invalid_value)
                except ValidationError as e:
                    errors = e.errors()
                    self.assertEqual(len(errors), 1)
                    formatted = format_validation_error(errors[0])
                    self.assertEqual(formatted, expected_message)

    def test_format_value_error(self):
        """Test formatting of value error from custom validator."""

        class TestModel(BaseModel):
            age: int

            @field_validator("age")
            @classmethod
            def validate_age(cls, v):
                if v < 0:
                    raise ValueError("Age must be positive")
                return v

        try:
            TestModel(age=-5)
        except ValidationError as e:
            errors = e.errors()
            self.assertEqual(len(errors), 1)
            formatted = format_validation_error(errors[0])
            self.assertEqual(formatted, "Age must be positive")

    def test_format_unknown_error_type_fallback(self):
        """Test that unknown error types fall back to default formatting."""
        # Mock an error with an unknown type
        mock_error = {
            "type": "unknown_custom_type",
            "msg": "Input should be a valid email address",
            "ctx": {},
        }
        formatted = format_validation_error(mock_error)
        # Should use the else branch which does replacements on the message
        self.assertEqual(formatted, "Please enter a valid email address")

    def test_handle_validation_error_structure(self):
        """Test that handle_validation_error raises HttpError with correct structure."""

        class TestModel(BaseModel):
            name: str
            count: int

        try:
            TestModel(name=123, count="invalid")
        except ValidationError as e:
            # handle_validation_error should raise HttpError
            with self.assertRaises(HttpError) as context:
                handle_validation_error(e)

            http_error = context.exception
            self.assertEqual(http_error.status_code, 400)

            # Parse the JSON from the error message
            error_data = json.loads(http_error.message)

            # Check structure
            self.assertEqual(error_data["type"], "validation_error")
            self.assertIn("field_errors", error_data)
            self.assertIn("message", error_data)
            self.assertEqual(error_data["message"], "Please correct the errors below")

            # Check that both fields have errors
            self.assertIn("name", error_data["field_errors"])
            self.assertIn("count", error_data["field_errors"])

    def test_handle_validation_error_no_pydantic_internals(self):
        """Test that handle_validation_error doesn't expose Pydantic internals."""

        class TestModel(BaseModel):
            email: str

        try:
            TestModel(email=123)
        except ValidationError as e:
            with self.assertRaises(HttpError) as context:
                handle_validation_error(e)

            http_error = context.exception
            error_data = json.loads(http_error.message)
            error_str = json.dumps(error_data)

            # Ensure no Pydantic internals are exposed
            self.assertNotIn("pydantic", error_str.lower())
            self.assertNotIn("https://errors.pydantic.dev", error_str)
            self.assertNotIn("loc", error_str)

    def test_handle_validation_error_user_friendly_messages(
"""Test that all error messages are user-friendly."""
class TestModel(BaseModel):
name: str
age: int
status: Literal["active", "inactive"]
try:
TestModel(name=123, status="invalid") # Multiple errors
except ValidationError as e:
with self.assertRaises(HttpError) as context:
handle_validation_error(e)
http_error = context.exception
error_data = json.loads(http_error.message)
# All messages should be user-friendly (contain "Please" or "This field")
for field, messages in error_data["field_errors"].items():
for message in messages:
# User-friendly messages start with "Please" or "This field"
self.assertTrue(
message.startswith("Please") or message.startswith("This field"),
f"Message '{message}' is not user-friendly",
)
def test_handle_validation_error_multiple_errors_same_field(self):
"""Test handling multiple validation errors for the same field."""
class TestModel(BaseModel):
value: int
@field_validator("value")
@classmethod
def validate_range(cls, v):
if v < 0:
raise ValueError("Must be non-negative")
if v > 100:
raise ValueError("Must be at most 100")
return v
# Test with string (type error) - this will fail before the validator runs
try:
TestModel(value="invalid")
except ValidationError as e:
with self.assertRaises(HttpError) as context:
handle_validation_error(e)
http_error = context.exception
error_data = json.loads(http_error.message)
# Should have error for 'value' field
self.assertIn("value", error_data["field_errors"])
self.assertIsInstance(error_data["field_errors"]["value"], list)
self.assertGreater(len(error_data["field_errors"]["value"]), 0)
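The helpers these tests exercise are not part of this diff. A minimal sketch of `format_validation_error` / `handle_validation_error` that would satisfy the assertions above (the error-type strings, the `HttpError` shape, and the message wording are assumptions inferred from the tests, not the real implementation) might look like:

```python
import json

def format_validation_error(error: dict) -> str:
    """Turn one Pydantic v2 error dict into a user-facing message (sketch)."""
    etype = error.get("type", "")
    if etype == "missing":
        return "This field is required"
    if etype.startswith("literal"):
        # Literal errors carry the valid options in ctx["expected"].
        expected = error.get("ctx", {}).get("expected", "")
        return f"Please enter one of: {expected}"
    for t in ("string", "int", "float"):
        # Covers both *_type and *_parsing variants.
        if etype.startswith(t):
            return f"Please enter a valid {t}"
    if etype == "value_error":
        # Custom validators: drop Pydantic's "Value error, " prefix.
        return error.get("msg", "").removeprefix("Value error, ")
    # Fallback: soften Pydantic's default phrasing.
    return error.get("msg", "").replace("Input should be", "Please enter")

class HttpError(Exception):
    """Stand-in for the framework's HttpError (hypothetical shape)."""
    def __init__(self, status_code: int, message: str):
        super().__init__(message)
        self.status_code, self.message = status_code, message

def handle_validation_error(exc) -> None:
    """Collapse a ValidationError into a JSON body without Pydantic internals."""
    field_errors: dict[str, list[str]] = {}
    for err in exc.errors():
        field = str(err["loc"][-1]) if err.get("loc") else "__all__"
        field_errors.setdefault(field, []).append(format_validation_error(err))
    raise HttpError(400, json.dumps({
        "type": "validation_error",
        "message": "Please correct the errors below",
        "field_errors": field_errors,
    }))
```

Note the deliberate absence of `loc` and the Pydantic docs URL in the emitted JSON, which is exactly what `test_handle_validation_error_no_pydantic_internals` checks for.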
@ -0,0 +1,446 @@
"""
Tests for Additional Information API endpoints.
Tests CRUD operations on scenario additional information including the new PATCH endpoint.
"""
from django.test import TestCase, tag
import json
from uuid import uuid4
from epdb.logic import PackageManager, UserManager
from epdb.models import Scenario
@tag("api", "additional_information")
class AdditionalInformationAPITests(TestCase):
"""Test additional information API endpoints."""
@classmethod
def setUpTestData(cls):
"""Set up test data: user, package, and scenario."""
cls.user = UserManager.create_user(
"ai-test-user",
"ai-test@envipath.com",
"SuperSafe",
set_setting=False,
add_to_group=False,
is_active=True,
)
cls.other_user = UserManager.create_user(
"ai-other-user",
"ai-other@envipath.com",
"SuperSafe",
set_setting=False,
add_to_group=False,
is_active=True,
)
cls.package = PackageManager.create_package(
cls.user, "AI Test Package", "Test package for additional information"
)
# Package owned by other_user (no access for cls.user)
cls.other_package = PackageManager.create_package(
cls.other_user, "Other Package", "Package without access"
)
# Create a scenario for testing
cls.scenario = Scenario.objects.create(
package=cls.package,
name="Test Scenario",
description="Test scenario for additional information tests",
scenario_type="biodegradation",
scenario_date="2024-01-01",
)
cls.other_scenario = Scenario.objects.create(
package=cls.other_package,
name="Other Scenario",
description="Scenario in package without access",
scenario_type="biodegradation",
scenario_date="2024-01-01",
)
def test_list_all_schemas(self):
"""Test GET /api/v1/information/schema/ returns all schemas."""
self.client.force_login(self.user)
response = self.client.get("/api/v1/information/schema/")
self.assertEqual(response.status_code, 200)
data = response.json()
self.assertIsInstance(data, dict)
# Should have multiple schemas
self.assertGreater(len(data), 0)
# Each schema should have RJSF format
for name, schema in data.items():
self.assertIn("schema", schema)
self.assertIn("uiSchema", schema)
self.assertIn("formData", schema)
self.assertIn("groups", schema)
def test_get_specific_schema(self):
"""Test GET /api/v1/information/schema/{model_name}/ returns specific schema."""
self.client.force_login(self.user)
# Assuming 'temperature' is a valid model
response = self.client.get("/api/v1/information/schema/temperature/")
self.assertEqual(response.status_code, 200)
data = response.json()
self.assertIn("schema", data)
self.assertIn("uiSchema", data)
def test_get_nonexistent_schema_returns_404(self):
"""Test GET for non-existent schema returns 404."""
self.client.force_login(self.user)
response = self.client.get("/api/v1/information/schema/nonexistent/")
self.assertEqual(response.status_code, 404)
def test_list_scenario_information_empty(self):
"""Test GET /api/v1/scenario/{uuid}/information/ returns empty list initially."""
self.client.force_login(self.user)
response = self.client.get(f"/api/v1/scenario/{self.scenario.uuid}/information/")
self.assertEqual(response.status_code, 200)
data = response.json()
self.assertIsInstance(data, list)
self.assertEqual(len(data), 0)
def test_create_additional_information(self):
"""Test POST creates additional information."""
self.client.force_login(self.user)
# Create temperature information (assuming temperature model exists)
payload = {"interval": {"start": 20, "end": 25}}
response = self.client.post(
f"/api/v1/scenario/{self.scenario.uuid}/information/temperature/",
data=json.dumps(payload),
content_type="application/json",
)
self.assertEqual(response.status_code, 200)
data = response.json()
self.assertEqual(data["status"], "created")
self.assertIn("uuid", data)
self.assertIsNotNone(data["uuid"])
def test_create_with_invalid_data_returns_400(self):
"""Test POST with invalid data returns 400 with validation errors."""
self.client.force_login(self.user)
# Invalid data (missing required fields or wrong types)
payload = {"invalid_field": "value"}
response = self.client.post(
f"/api/v1/scenario/{self.scenario.uuid}/information/temperature/",
data=json.dumps(payload),
content_type="application/json",
)
self.assertEqual(response.status_code, 400)
data = response.json()
# Should have validation error details in 'detail' field
self.assertIn("detail", data)
def test_validation_errors_are_user_friendly(self):
"""Test that validation errors are user-friendly and field-specific."""
self.client.force_login(self.user)
# Invalid data - wrong type (string instead of number in interval)
payload = {"interval": {"start": "not_a_number", "end": 25}}
response = self.client.post(
f"/api/v1/scenario/{self.scenario.uuid}/information/temperature/",
data=json.dumps(payload),
content_type="application/json",
)
self.assertEqual(response.status_code, 400)
data = response.json()
# Parse the error response - Django Ninja wraps errors in 'detail'
error_str = data.get("detail") or data.get("error")
self.assertIsNotNone(error_str, "Response should contain error details")
# Parse the JSON error string
error_data = json.loads(error_str)
# Check structure
self.assertEqual(error_data.get("type"), "validation_error")
self.assertIn("field_errors", error_data)
self.assertIn("message", error_data)
# Ensure error messages are user-friendly (no Pydantic URLs or technical jargon)
error_str = json.dumps(error_data)
self.assertNotIn("pydantic", error_str.lower())
self.assertNotIn("https://errors.pydantic.dev", error_str)
self.assertNotIn("loc", error_str) # No technical field like 'loc'
# Check that error message is helpful
self.assertIn("Please", error_data["message"]) # User-friendly language
def test_patch_additional_information(self):
"""Test PATCH updates existing additional information."""
self.client.force_login(self.user)
# First create an item
create_payload = {"interval": {"start": 20, "end": 25}}
create_response = self.client.post(
f"/api/v1/scenario/{self.scenario.uuid}/information/temperature/",
data=json.dumps(create_payload),
content_type="application/json",
)
item_uuid = create_response.json()["uuid"]
# Then update it with PATCH
update_payload = {"interval": {"start": 30, "end": 35}}
patch_response = self.client.patch(
f"/api/v1/scenario/{self.scenario.uuid}/information/item/{item_uuid}/",
data=json.dumps(update_payload),
content_type="application/json",
)
self.assertEqual(patch_response.status_code, 200)
data = patch_response.json()
self.assertEqual(data["status"], "updated")
self.assertEqual(data["uuid"], item_uuid) # UUID preserved
# Verify the data was updated
list_response = self.client.get(f"/api/v1/scenario/{self.scenario.uuid}/information/")
items = list_response.json()
self.assertEqual(len(items), 1)
updated_item = items[0]
self.assertEqual(updated_item["uuid"], item_uuid)
self.assertEqual(updated_item["data"]["interval"]["start"], 30)
self.assertEqual(updated_item["data"]["interval"]["end"], 35)
def test_patch_nonexistent_item_returns_404(self):
"""Test PATCH on non-existent item returns 404."""
self.client.force_login(self.user)
fake_uuid = str(uuid4())
payload = {"interval": {"start": 30, "end": 35}}
response = self.client.patch(
f"/api/v1/scenario/{self.scenario.uuid}/information/item/{fake_uuid}/",
data=json.dumps(payload),
content_type="application/json",
)
self.assertEqual(response.status_code, 404)
def test_patch_with_invalid_data_returns_400(self):
"""Test PATCH with invalid data returns 400."""
self.client.force_login(self.user)
# First create an item
create_payload = {"interval": {"start": 20, "end": 25}}
create_response = self.client.post(
f"/api/v1/scenario/{self.scenario.uuid}/information/temperature/",
data=json.dumps(create_payload),
content_type="application/json",
)
item_uuid = create_response.json()["uuid"]
# Try to update with invalid data
invalid_payload = {"invalid_field": "value"}
patch_response = self.client.patch(
f"/api/v1/scenario/{self.scenario.uuid}/information/item/{item_uuid}/",
data=json.dumps(invalid_payload),
content_type="application/json",
)
self.assertEqual(patch_response.status_code, 400)
def test_patch_validation_errors_are_user_friendly(self):
"""Test that PATCH validation errors are user-friendly and field-specific."""
self.client.force_login(self.user)
# First create an item
create_payload = {"interval": {"start": 20, "end": 25}}
create_response = self.client.post(
f"/api/v1/scenario/{self.scenario.uuid}/information/temperature/",
data=json.dumps(create_payload),
content_type="application/json",
)
item_uuid = create_response.json()["uuid"]
# Update with invalid data - wrong type (string instead of number in interval)
invalid_payload = {"interval": {"start": "not_a_number", "end": 25}}
patch_response = self.client.patch(
f"/api/v1/scenario/{self.scenario.uuid}/information/item/{item_uuid}/",
data=json.dumps(invalid_payload),
content_type="application/json",
)
self.assertEqual(patch_response.status_code, 400)
data = patch_response.json()
# Parse the error response - Django Ninja wraps errors in 'detail'
error_str = data.get("detail") or data.get("error")
self.assertIsNotNone(error_str, "Response should contain error details")
# Parse the JSON error string
error_data = json.loads(error_str)
# Check structure
self.assertEqual(error_data.get("type"), "validation_error")
self.assertIn("field_errors", error_data)
self.assertIn("message", error_data)
# Ensure error messages are user-friendly (no Pydantic URLs or technical jargon)
error_str = json.dumps(error_data)
self.assertNotIn("pydantic", error_str.lower())
self.assertNotIn("https://errors.pydantic.dev", error_str)
self.assertNotIn("loc", error_str) # No technical field like 'loc'
# Check that error message is helpful
self.assertIn("Please", error_data["message"]) # User-friendly language
def test_delete_additional_information(self):
"""Test DELETE removes additional information."""
self.client.force_login(self.user)
# Create an item
create_payload = {"interval": {"start": 20, "end": 25}}
create_response = self.client.post(
f"/api/v1/scenario/{self.scenario.uuid}/information/temperature/",
data=json.dumps(create_payload),
content_type="application/json",
)
item_uuid = create_response.json()["uuid"]
# Delete it
delete_response = self.client.delete(
f"/api/v1/scenario/{self.scenario.uuid}/information/item/{item_uuid}/"
)
self.assertEqual(delete_response.status_code, 200)
data = delete_response.json()
self.assertEqual(data["status"], "deleted")
# Verify deletion
list_response = self.client.get(f"/api/v1/scenario/{self.scenario.uuid}/information/")
items = list_response.json()
self.assertEqual(len(items), 0)
def test_delete_nonexistent_item_returns_404(self):
"""Test DELETE on non-existent item returns 404."""
self.client.force_login(self.user)
fake_uuid = str(uuid4())
response = self.client.delete(
f"/api/v1/scenario/{self.scenario.uuid}/information/item/{fake_uuid}/"
)
self.assertEqual(response.status_code, 404)
def test_multiple_items_crud(self):
"""Test creating, updating, and deleting multiple items."""
self.client.force_login(self.user)
# Create first item
item1_payload = {"interval": {"start": 20, "end": 25}}
response1 = self.client.post(
f"/api/v1/scenario/{self.scenario.uuid}/information/temperature/",
data=json.dumps(item1_payload),
content_type="application/json",
)
item1_uuid = response1.json()["uuid"]
# Create second item (different type if available, or same type)
item2_payload = {"interval": {"start": 30, "end": 35}}
response2 = self.client.post(
f"/api/v1/scenario/{self.scenario.uuid}/information/temperature/",
data=json.dumps(item2_payload),
content_type="application/json",
)
item2_uuid = response2.json()["uuid"]
# Verify both exist
list_response = self.client.get(f"/api/v1/scenario/{self.scenario.uuid}/information/")
items = list_response.json()
self.assertEqual(len(items), 2)
# Update first item
update_payload = {"interval": {"start": 15, "end": 20}}
self.client.patch(
f"/api/v1/scenario/{self.scenario.uuid}/information/item/{item1_uuid}/",
data=json.dumps(update_payload),
content_type="application/json",
)
# Delete second item
self.client.delete(f"/api/v1/scenario/{self.scenario.uuid}/information/item/{item2_uuid}/")
# Verify final state: one item with updated data
list_response = self.client.get(f"/api/v1/scenario/{self.scenario.uuid}/information/")
items = list_response.json()
self.assertEqual(len(items), 1)
self.assertEqual(items[0]["uuid"], item1_uuid)
self.assertEqual(items[0]["data"]["interval"]["start"], 15)
def test_list_info_denied_without_permission(self):
"""User cannot list info for scenario in package they don't have access to"""
self.client.force_login(self.user)
response = self.client.get(f"/api/v1/scenario/{self.other_scenario.uuid}/information/")
self.assertEqual(response.status_code, 403)
def test_add_info_denied_without_permission(self):
"""User cannot add info to scenario in package they don't have access to"""
self.client.force_login(self.user)
payload = {"interval": {"start": 25, "end": 30}}
response = self.client.post(
f"/api/v1/scenario/{self.other_scenario.uuid}/information/temperature/",
json.dumps(payload),
content_type="application/json",
)
self.assertEqual(response.status_code, 403)
def test_update_info_denied_without_permission(self):
"""User cannot update info in scenario they don't have access to"""
self.client.force_login(self.other_user)
# First create an item as other_user
create_payload = {"interval": {"start": 20, "end": 25}}
create_response = self.client.post(
f"/api/v1/scenario/{self.other_scenario.uuid}/information/temperature/",
data=json.dumps(create_payload),
content_type="application/json",
)
item_uuid = create_response.json()["uuid"]
# Try to update as user (who doesn't have access)
self.client.force_login(self.user)
update_payload = {"interval": {"start": 30, "end": 35}}
response = self.client.patch(
f"/api/v1/scenario/{self.other_scenario.uuid}/information/item/{item_uuid}/",
data=json.dumps(update_payload),
content_type="application/json",
)
self.assertEqual(response.status_code, 403)
def test_delete_info_denied_without_permission(self):
"""User cannot delete info from scenario they don't have access to"""
self.client.force_login(self.other_user)
# First create an item as other_user
create_payload = {"interval": {"start": 20, "end": 25}}
create_response = self.client.post(
f"/api/v1/scenario/{self.other_scenario.uuid}/information/temperature/",
data=json.dumps(create_payload),
content_type="application/json",
)
item_uuid = create_response.json()["uuid"]
# Try to delete as user (who doesn't have access)
self.client.force_login(self.user)
response = self.client.delete(
f"/api/v1/scenario/{self.other_scenario.uuid}/information/item/{item_uuid}/"
)
self.assertEqual(response.status_code, 403)
def test_nonexistent_scenario_returns_404(self):
"""Test operations on non-existent scenario return 404."""
self.client.force_login(self.user)
fake_uuid = uuid4()
response = self.client.get(f"/api/v1/scenario/{fake_uuid}/information/")
self.assertEqual(response.status_code, 404)
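Several tests above repeat the same unwrapping dance: pull the JSON string out of Django Ninja's `detail` (or a fallback `error`) key, parse it, and inspect `field_errors`. As a sketch only (this helper is hypothetical, not part of the API or the test suite), that pattern could be centralized like so:

```python
import json

def parse_field_errors(response_json: dict) -> dict:
    """Extract per-field messages from a 400 validation response (sketch).

    Assumes the server packs a JSON string into 'detail' (Django Ninja)
    or 'error', shaped as {"type": "validation_error", "message": ...,
    "field_errors": {field: [messages]}} as the tests above expect.
    """
    raw = response_json.get("detail") or response_json.get("error")
    if raw is None:
        return {}
    data = json.loads(raw)
    if data.get("type") != "validation_error":
        return {}
    return data.get("field_errors", {})
```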
@ -261,13 +261,6 @@ class GlobalCompoundListPermissionTest(APIPermissionTestBase):
self.assertEqual(response.status_code, 200)
payload = response.json()
# user2 should see compounds from:
# - reviewed_package (public)
# - unreviewed_package_read (READ permission)
# - unreviewed_package_write (WRITE permission)
# - unreviewed_package_all (ALL permission)
# - group_package (via group membership)
# Total: 5 compounds
self.assertEqual(payload["total_items"], 5)
visible_uuids = {item["uuid"] for item in payload["items"]}
@ -303,54 +296,6 @@ class GlobalCompoundListPermissionTest(APIPermissionTestBase):
# user1 owns all packages, so sees all compounds
self.assertEqual(payload["total_items"], 7)
def test_read_permission_allows_viewing(self):
"""READ permission allows viewing compounds."""
self.client.force_login(self.user2)
response = self.client.get(self.ENDPOINT)
self.assertEqual(response.status_code, 200)
payload = response.json()
# Check that read_compound is included
uuids = [item["uuid"] for item in payload["items"]]
self.assertIn(str(self.read_compound.uuid), uuids)
def test_write_permission_allows_viewing(self):
"""WRITE permission also allows viewing compounds."""
self.client.force_login(self.user2)
response = self.client.get(self.ENDPOINT)
self.assertEqual(response.status_code, 200)
payload = response.json()
# Check that write_compound is included
uuids = [item["uuid"] for item in payload["items"]]
self.assertIn(str(self.write_compound.uuid), uuids)
def test_all_permission_allows_viewing(self):
"""ALL permission allows viewing compounds."""
self.client.force_login(self.user2)
response = self.client.get(self.ENDPOINT)
self.assertEqual(response.status_code, 200)
payload = response.json()
# Check that all_compound is included
uuids = [item["uuid"] for item in payload["items"]]
self.assertIn(str(self.all_compound.uuid), uuids)
def test_group_permission_allows_viewing(self):
"""Group membership grants access to group-permitted packages."""
self.client.force_login(self.user2)
response = self.client.get(self.ENDPOINT)
self.assertEqual(response.status_code, 200)
payload = response.json()
# Check that group_compound is included
uuids = [item["uuid"] for item in payload["items"]]
self.assertIn(str(self.group_compound.uuid), uuids)
@tag("api", "end2end")
class PackageScopedCompoundListPermissionTest(APIPermissionTestBase):
@ -134,7 +134,7 @@ class BaseTestAPIGetPaginated:
f"({self.total_reviewed} <= {self.default_page_size})"
)
- response = self.client.get(self.global_endpoint, {"page": 2})
+ response = self.client.get(self.global_endpoint, {"page": 2, "review_status": True})
self.assertEqual(response.status_code, 200)
payload = response.json()
@ -0,0 +1,301 @@
"""
Tests for Scenario Creation Endpoint Error Handling.
Tests comprehensive error handling for POST /api/v1/package/{uuid}/scenario/
including package not found, permission denied, validation errors, and database errors.
"""
from django.test import TestCase, tag
import json
from uuid import uuid4
from epdb.logic import PackageManager, UserManager
from epdb.models import Scenario
@tag("api", "scenario_creation")
class ScenarioCreationAPITests(TestCase):
"""Test scenario creation endpoint error handling."""
@classmethod
def setUpTestData(cls):
"""Set up test data: users and packages."""
cls.user = UserManager.create_user(
"scenario-test-user",
"scenario-test@envipath.com",
"SuperSafe",
set_setting=False,
add_to_group=False,
is_active=True,
)
cls.other_user = UserManager.create_user(
"other-user",
"other@envipath.com",
"SuperSafe",
set_setting=False,
add_to_group=False,
is_active=True,
)
cls.package = PackageManager.create_package(
cls.user, "Test Package", "Test package for scenario creation"
)
def test_create_scenario_package_not_found(self):
"""Test that non-existent package UUID returns 404."""
self.client.force_login(self.user)
fake_uuid = uuid4()
payload = {
"name": "Test Scenario",
"description": "Test description",
"scenario_date": "2024-01-01",
"scenario_type": "biodegradation",
"additional_information": [],
}
response = self.client.post(
f"/api/v1/package/{fake_uuid}/scenario/",
data=json.dumps(payload),
content_type="application/json",
)
self.assertEqual(response.status_code, 404)
self.assertIn(f"Package with UUID {fake_uuid} not found", response.json()["detail"])
def test_create_scenario_insufficient_permissions(self):
"""Test that unauthorized access returns 403."""
self.client.force_login(self.other_user)
payload = {
"name": "Test Scenario",
"description": "Test description",
"scenario_date": "2024-01-01",
"scenario_type": "biodegradation",
"additional_information": [],
}
response = self.client.post(
f"/api/v1/package/{self.package.uuid}/scenario/",
data=json.dumps(payload),
content_type="application/json",
)
self.assertEqual(response.status_code, 403)
self.assertIn("permission", response.json()["detail"].lower())
def test_create_scenario_invalid_ai_type(self):
"""Test that unknown additional information type returns 400."""
self.client.force_login(self.user)
payload = {
"name": "Test Scenario",
"description": "Test description",
"scenario_date": "2024-01-01",
"scenario_type": "biodegradation",
"additional_information": [
{"type": "invalid_type_that_does_not_exist", "data": {"some_field": "some_value"}}
],
}
response = self.client.post(
f"/api/v1/package/{self.package.uuid}/scenario/",
data=json.dumps(payload),
content_type="application/json",
)
self.assertEqual(response.status_code, 400)
response_data = response.json()
self.assertIn("Validation errors", response_data["detail"])
def test_create_scenario_validation_error(self):
"""Test that invalid additional information data returns 400."""
self.client.force_login(self.user)
# Use malformed data structure for an actual AI type
payload = {
"name": "Test Scenario",
"description": "Test description",
"scenario_date": "2024-01-01",
"scenario_type": "biodegradation",
"additional_information": [
{
"type": "invalid_type_name",
"data": None, # This should cause a validation error
}
],
}
response = self.client.post(
f"/api/v1/package/{self.package.uuid}/scenario/",
data=json.dumps(payload),
content_type="application/json",
)
# Should return 422 for validation errors
self.assertEqual(response.status_code, 422)
def test_create_scenario_success(self):
"""Test that valid scenario creation returns 200."""
self.client.force_login(self.user)
payload = {
"name": "Test Scenario",
"description": "Test description",
"scenario_date": "2024-01-01",
"scenario_type": "biodegradation",
"additional_information": [],
}
response = self.client.post(
f"/api/v1/package/{self.package.uuid}/scenario/",
data=json.dumps(payload),
content_type="application/json",
)
self.assertEqual(response.status_code, 200)
data = response.json()
self.assertEqual(data["name"], "Test Scenario")
self.assertEqual(data["description"], "Test description")
# Verify scenario was actually created
scenario = Scenario.objects.get(name="Test Scenario")
self.assertEqual(scenario.package, self.package)
self.assertEqual(scenario.scenario_type, "biodegradation")
def test_create_scenario_auto_name(self):
"""Test that empty name triggers auto-generation."""
self.client.force_login(self.user)
payload = {
"name": "", # Empty name should be auto-generated
"description": "Test description",
"scenario_date": "2024-01-01",
"scenario_type": "biodegradation",
"additional_information": [],
}
response = self.client.post(
f"/api/v1/package/{self.package.uuid}/scenario/",
data=json.dumps(payload),
content_type="application/json",
)
self.assertEqual(response.status_code, 200)
data = response.json()
# Auto-generated name should follow pattern "Scenario N"
self.assertTrue(data["name"].startswith("Scenario "))
def test_create_scenario_xss_protection(self):
"""Test that XSS attempts are sanitized."""
self.client.force_login(self.user)
payload = {
"name": "<script>alert('xss')</script>Clean Name",
"description": "<img src=x onerror=alert('xss')>Description",
"scenario_date": "2024-01-01",
"scenario_type": "biodegradation",
"additional_information": [],
}
response = self.client.post(
f"/api/v1/package/{self.package.uuid}/scenario/",
data=json.dumps(payload),
content_type="application/json",
)
self.assertEqual(response.status_code, 200)
data = response.json()
# XSS should be cleaned out
self.assertNotIn("<script>", data["name"])
self.assertNotIn("onerror", data["description"])
def test_create_scenario_missing_required_field(self):
"""Test that missing required fields returns validation error."""
self.client.force_login(self.user)
# Missing 'name' field entirely
payload = {
"description": "Test description",
"scenario_date": "2024-01-01",
"scenario_type": "biodegradation",
"additional_information": [],
}
response = self.client.post(
f"/api/v1/package/{self.package.uuid}/scenario/",
data=json.dumps(payload),
content_type="application/json",
)
# Should return 422 for schema validation errors
self.assertEqual(response.status_code, 422)
def test_create_scenario_type_error_in_ai(self):
"""Test that TypeError in AI instantiation returns 400."""
self.client.force_login(self.user)
payload = {
"name": "Test Scenario",
"description": "Test description",
"scenario_date": "2024-01-01",
"scenario_type": "biodegradation",
"additional_information": [
{
"type": "invalid_type_name",
"data": "string instead of dict", # Wrong type
}
],
}
response = self.client.post(
f"/api/v1/package/{self.package.uuid}/scenario/",
data=json.dumps(payload),
content_type="application/json",
)
# Should return 422 for validation errors
self.assertEqual(response.status_code, 422)
def test_create_scenario_default_values(self):
"""Test that default values are applied correctly."""
self.client.force_login(self.user)
# Minimal payload with only name
payload = {"name": "Minimal Scenario"}
response = self.client.post(
f"/api/v1/package/{self.package.uuid}/scenario/",
data=json.dumps(payload),
content_type="application/json",
)
self.assertEqual(response.status_code, 200)
data = response.json()
self.assertEqual(data["name"], "Minimal Scenario")
# Check defaults are applied
scenario = Scenario.objects.get(name="Minimal Scenario")
# Default description from model is "no description"
self.assertIn(scenario.description.lower(), ["", "no description"])
def test_create_scenario_unicode_characters(self):
"""Test that unicode characters are handled properly."""
self.client.force_login(self.user)
payload = {
"name": "Test Scenario 测试 🧪",
"description": "Description with émojis and spëcial çhars",
"scenario_date": "2024-01-01",
"scenario_type": "biodegradation",
"additional_information": [],
}
response = self.client.post(
f"/api/v1/package/{self.package.uuid}/scenario/",
data=json.dumps(payload),
content_type="application/json",
)
self.assertEqual(response.status_code, 200)
data = response.json()
self.assertIn("测试", data["name"])
self.assertIn("émojis", data["description"])
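The sanitizer behind `test_create_scenario_xss_protection` is not shown in this diff. As a rough illustration only (the real endpoint may well delegate to a dedicated sanitizer library rather than a regex), stripping anything tag-shaped is enough to satisfy the assertions above:

```python
import re

# Hypothetical sketch; matches anything that looks like an HTML tag.
_TAG_RE = re.compile(r"<[^>]+>")

def strip_tags(value: str) -> str:
    """Remove tag-like substrings, keeping the surrounding text (sketch)."""
    return _TAG_RE.sub("", value)
```

Applied to the test payload, `<script>...</script>` and the `<img ... onerror=...>` tag disappear while "Clean Name" and "Description" survive, which is what the two `assertNotIn` checks verify.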
@ -0,0 +1,114 @@
"""
Property-based tests for schema generation.
Tests that verify schema generation works correctly for all models,
regardless of their structure.
"""
import pytest
from typing import Type
from pydantic import BaseModel
from envipy_additional_information import registry, EnviPyModel
from epapi.utils.schema_transformers import build_rjsf_output
class TestSchemaGeneration:
"""Test that all models can generate valid RJSF schemas."""
@pytest.mark.parametrize("model_name,model_cls", list(registry.list_models().items()))
def test_all_models_generate_rjsf(self, model_name: str, model_cls: Type[BaseModel]):
"""Every model in the registry should generate valid RJSF format."""
# Skip non-EnviPyModel classes (parsers, etc.)
if not issubclass(model_cls, EnviPyModel):
pytest.skip(f"{model_name} is not an EnviPyModel")
# Should not raise exception
result = build_rjsf_output(model_cls)
# Verify structure
assert isinstance(result, dict), f"{model_name}: Result should be a dict"
assert "schema" in result, f"{model_name}: Missing 'schema' key"
assert "uiSchema" in result, f"{model_name}: Missing 'uiSchema' key"
assert "formData" in result, f"{model_name}: Missing 'formData' key"
assert "groups" in result, f"{model_name}: Missing 'groups' key"
# Verify types
assert isinstance(result["schema"], dict), f"{model_name}: schema should be dict"
assert isinstance(result["uiSchema"], dict), f"{model_name}: uiSchema should be dict"
assert isinstance(result["formData"], dict), f"{model_name}: formData should be dict"
assert isinstance(result["groups"], list), f"{model_name}: groups should be list"
# Verify schema has properties
assert "properties" in result["schema"], f"{model_name}: schema should have 'properties'"
assert isinstance(result["schema"]["properties"], dict), (
f"{model_name}: properties should be dict"
)
@pytest.mark.parametrize("model_name,model_cls", list(registry.list_models().items()))
def test_ui_schema_matches_schema_fields(self, model_name: str, model_cls: Type[BaseModel]):
"""uiSchema keys should match schema properties (or be nested for intervals)."""
if not issubclass(model_cls, EnviPyModel):
pytest.skip(f"{model_name} is not an EnviPyModel")
result = build_rjsf_output(model_cls)
schema_props = set(result["schema"]["properties"].keys())
ui_schema_keys = set(result["uiSchema"].keys())
# uiSchema should have entries for all top-level properties
# (intervals may have nested start/end, but the main field should be present)
assert ui_schema_keys.issubset(schema_props), (
f"{model_name}: uiSchema has keys not in schema: {ui_schema_keys - schema_props}"
)
@pytest.mark.parametrize("model_name,model_cls", list(registry.list_models().items()))
def test_groups_is_list_of_strings(self, model_name: str, model_cls: Type[BaseModel]):
"""Groups should be a list of strings."""
if not issubclass(model_cls, EnviPyModel):
pytest.skip(f"{model_name} is not an EnviPyModel")
result = build_rjsf_output(model_cls)
groups = result["groups"]
assert isinstance(groups, list), f"{model_name}: groups should be list"
assert all(isinstance(g, str) for g in groups), (
f"{model_name}: all groups should be strings, got {groups}"
)
assert len(groups) > 0, f"{model_name}: should have at least one group"
@pytest.mark.parametrize("model_name,model_cls", list(registry.list_models().items()))
def test_form_data_matches_schema(self, model_name: str, model_cls: Type[BaseModel]):
"""formData keys should match schema properties."""
if not issubclass(model_cls, EnviPyModel):
pytest.skip(f"{model_name} is not an EnviPyModel")
result = build_rjsf_output(model_cls)
schema_props = set(result["schema"]["properties"].keys())
form_data_keys = set(result["formData"].keys())
# formData should only contain keys that are in schema
assert form_data_keys.issubset(schema_props), (
f"{model_name}: formData has keys not in schema: {form_data_keys - schema_props}"
)
class TestWidgetTypes:
"""Test that widget types are valid."""
@pytest.mark.parametrize("model_name,model_cls", list(registry.list_models().items()))
def test_widget_types_are_valid(self, model_name: str, model_cls: Type[BaseModel]):
"""All widget types in uiSchema should be valid WidgetType values."""
from envipy_additional_information.ui_config import WidgetType
if not issubclass(model_cls, EnviPyModel):
pytest.skip(f"{model_name} is not an EnviPyModel")
result = build_rjsf_output(model_cls)
valid_widgets = {wt.value for wt in WidgetType}
for field_name, ui_config in result["uiSchema"].items():
widget = ui_config.get("ui:widget")
if widget:
assert widget in valid_widgets, (
f"{model_name}.{field_name}: Invalid widget '{widget}'. Valid: {valid_widgets}"
)


@ -0,0 +1,94 @@
from datetime import timedelta
from django.test import TestCase, tag
from django.utils import timezone
from epdb.logic import PackageManager, UserManager
from epdb.models import APIToken
@tag("api", "auth")
class BearerTokenAuthTests(TestCase):
@classmethod
def setUpTestData(cls):
cls.user = UserManager.create_user(
"token-user",
"token-user@envipath.com",
"SuperSafe",
set_setting=False,
add_to_group=False,
is_active=True,
)
default_pkg = cls.user.default_package
cls.user.default_package = None
cls.user.save()
if default_pkg:
default_pkg.delete()
cls.unreviewed_package = PackageManager.create_package(
cls.user, "Token Auth Package", "Package for token auth tests"
)
def _auth_header(self, raw_token):
return {"HTTP_AUTHORIZATION": f"Bearer {raw_token}"}
def test_valid_token_allows_access(self):
_, raw_token = APIToken.create_token(self.user, name="Valid Token", expires_days=1)
response = self.client.get("/api/v1/compounds/", **self._auth_header(raw_token))
self.assertEqual(response.status_code, 200)
def test_expired_token_rejected(self):
token, raw_token = APIToken.create_token(self.user, name="Expired Token", expires_days=1)
token.expires_at = timezone.now() - timedelta(days=1)
token.save(update_fields=["expires_at"])
response = self.client.get("/api/v1/compounds/", **self._auth_header(raw_token))
self.assertEqual(response.status_code, 401)
def test_inactive_token_rejected(self):
token, raw_token = APIToken.create_token(self.user, name="Inactive Token", expires_days=1)
token.is_active = False
token.save(update_fields=["is_active"])
response = self.client.get("/api/v1/compounds/", **self._auth_header(raw_token))
self.assertEqual(response.status_code, 401)
def test_invalid_token_rejected(self):
response = self.client.get("/api/v1/compounds/", HTTP_AUTHORIZATION="Bearer invalid-token")
self.assertEqual(response.status_code, 401)
def test_no_token_rejected(self):
self.client.logout()
response = self.client.get("/api/v1/compounds/")
self.assertEqual(response.status_code, 401)
def test_bearer_populates_request_user_for_packages(self):
response = self.client.get("/api/v1/packages/")
self.assertEqual(response.status_code, 200)
payload = response.json()
uuids = {item["uuid"] for item in payload["items"]}
self.assertNotIn(str(self.unreviewed_package.uuid), uuids)
_, raw_token = APIToken.create_token(self.user, name="Package Token", expires_days=1)
response = self.client.get("/api/v1/packages/", **self._auth_header(raw_token))
self.assertEqual(response.status_code, 200)
payload = response.json()
uuids = {item["uuid"] for item in payload["items"]}
self.assertIn(str(self.unreviewed_package.uuid), uuids)
def test_session_auth_still_works_without_bearer(self):
self.client.force_login(self.user)
response = self.client.get("/api/v1/packages/")
self.assertEqual(response.status_code, 200)
payload = response.json()
uuids = {item["uuid"] for item in payload["items"]}
self.assertIn(str(self.unreviewed_package.uuid), uuids)

epapi/utils/__init__.py Normal file

@ -0,0 +1,181 @@
"""
Schema transformation utilities for converting Pydantic models to RJSF format.
This module provides functions to extract UI configuration from Pydantic models
and transform them into React JSON Schema Form (RJSF) compatible format.
"""
from typing import Type, Optional, Any
import jsonref
from pydantic import BaseModel
from envipy_additional_information.ui_config import UIConfig
from envipy_additional_information import registry
def extract_groups(model_cls: Type[BaseModel]) -> list[str]:
"""
Extract groups from registry-stored group information.
Args:
model_cls: The model class
Returns:
List of group names the model belongs to
"""
return registry.get_groups(model_cls)
def extract_ui_metadata(model_cls: Type[BaseModel]) -> dict[str, Any]:
"""
Extract model-level UI metadata from UI class.
Returns metadata attributes that are NOT UIConfig instances.
Common metadata includes: unit, description, title.
"""
metadata: dict[str, Any] = {}
if not hasattr(model_cls, "UI"):
return metadata
ui_class = getattr(model_cls, "UI")
# Iterate over all attributes in the UI class
for attr_name in dir(ui_class):
# Skip private attributes
if attr_name.startswith("_"):
continue
# Get the attribute value
try:
attr_value = getattr(ui_class, attr_name)
except AttributeError:
continue
# Skip callables but keep types/classes
if callable(attr_value) and not isinstance(attr_value, type):
continue
# Skip UIConfig instances (these are field-level configs, not metadata)
# This includes both UIConfig and IntervalConfig
if isinstance(attr_value, UIConfig):
continue
metadata[attr_name] = attr_value
return metadata
def extract_ui_config_from_model(model_cls: Type[BaseModel]) -> dict[str, Any]:
"""
Extract UI configuration from model's UI class.
Returns a dictionary mapping field names to their UI schema configurations.
Trusts the config classes to handle their own transformation logic.
"""
ui_configs: dict[str, Any] = {}
if not hasattr(model_cls, "UI"):
return ui_configs
ui_class = getattr(model_cls, "UI")
schema = model_cls.model_json_schema()
field_names = schema.get("properties", {}).keys()
# Extract config for each field
for field_name in field_names:
# Skip if UI config doesn't exist for this field (field may be hidden from UI)
if not hasattr(ui_class, field_name):
continue
ui_config = getattr(ui_class, field_name)
if isinstance(ui_config, UIConfig):
ui_configs[field_name] = ui_config.to_ui_schema_field()
return ui_configs
def build_ui_schema(model_cls: Type[BaseModel]) -> dict:
"""Generate RJSF uiSchema from model's UI class."""
ui_schema = {}
# Extract field-level UI configs
field_configs = extract_ui_config_from_model(model_cls)
for field_name, config in field_configs.items():
ui_schema[field_name] = config
return ui_schema
def build_schema(model_cls: Type[BaseModel]) -> dict[str, Any]:
"""
Build JSON schema from Pydantic model, applying UI metadata.
Dereferences all $ref pointers to produce fully inlined schema.
This ensures the frontend receives schemas with enum values and nested
properties fully resolved, without needing client-side ref resolution.
Extracts model-level metadata from UI class (title, unit, etc.) and applies
it to the generated schema. This ensures UI metadata is the single source of truth.
"""
schema = model_cls.model_json_schema()
# Dereference $ref pointers (inlines $defs) using jsonref
# This ensures the frontend receives schemas with enum values and nested
# properties fully resolved, currently necessary for client-side rendering.
# FIXME: This is a hack to get the schema to work with alpine schema-form.js; replace once we migrate to a client-side framework.
schema = jsonref.replace_refs(schema, proxies=False)
# Remove $defs section as all refs are now inlined
if "$defs" in schema:
del schema["$defs"]
# Extract and apply UI metadata (title, unit, description, etc.)
ui_metadata = extract_ui_metadata(model_cls)
# Apply all metadata consistently as custom properties with x- prefix
# This ensures consistency and avoids conflicts with standard JSON Schema properties
for key, value in ui_metadata.items():
if value is not None:
schema[f"x-{key}"] = value
# Set standard title property from UI metadata for JSON Schema compliance
if "title" in ui_metadata:
schema["title"] = ui_metadata["title"]
elif "label" in ui_metadata:
schema["title"] = ui_metadata["label"]
return schema
def build_rjsf_output(model_cls: Type[BaseModel], initial_data: Optional[dict] = None) -> dict:
"""
Main function that returns complete RJSF format.
Trusts the config classes to handle their own transformation logic.
No special-case handling - if a config knows how to transform itself, it will.
Returns:
dict with keys: schema, uiSchema, formData, groups
"""
# Build schema with UI metadata applied
schema = build_schema(model_cls)
# Build UI schema - config classes handle their own transformation
ui_schema = build_ui_schema(model_cls)
# Extract groups from marker interfaces
groups = extract_groups(model_cls)
# Use provided initial_data or empty dict
form_data = initial_data if initial_data is not None else {}
return {
"schema": schema,
"uiSchema": ui_schema,
"formData": form_data,
"groups": groups,
}
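The metadata pass in `build_schema` can be sketched standalone: model-level UI metadata is namespaced with an `x-` prefix so it cannot collide with standard JSON Schema keywords, and `title` is additionally mirrored into the standard property. The schema and metadata values below are hypothetical, not taken from the registry.

```python
# Minimal sketch of the "x-" metadata pass in build_schema.
# Field names and metadata values here are illustrative only.
def apply_ui_metadata(schema: dict, ui_metadata: dict) -> dict:
    # Namespace every metadata entry to avoid JSON Schema keyword clashes
    for key, value in ui_metadata.items():
        if value is not None:
            schema[f"x-{key}"] = value
    # Mirror title/label into the standard "title" property
    if "title" in ui_metadata:
        schema["title"] = ui_metadata["title"]
    elif "label" in ui_metadata:
        schema["title"] = ui_metadata["label"]
    return schema

schema = {"type": "object", "properties": {"rate": {"type": "number"}}}
result = apply_ui_metadata(schema, {"unit": "mg/L", "label": "Rate constant"})
print(result["x-unit"], result["title"])  # mg/L Rate constant
```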


@ -0,0 +1,82 @@
"""Shared utilities for handling Pydantic validation errors."""
import json
from pydantic import ValidationError
from pydantic_core import ErrorDetails
from ninja.errors import HttpError
def format_validation_error(error: ErrorDetails) -> str:
"""Format a Pydantic validation error into a user-friendly message.
Args:
error: A Pydantic error details dictionary containing 'msg', 'type', 'ctx', etc.
Returns:
A user-friendly error message string.
"""
msg = error.get("msg") or "Invalid value"
error_type = error.get("type") or ""
# Handle common validation types with friendly messages
if error_type == "enum":
ctx = error.get("ctx", {})
expected = ctx.get("expected", "") if ctx else ""
return f"Please select a valid option{': ' + expected if expected else ''}"
elif error_type == "literal_error":
# Literal errors (like Literal["active", "inactive"])
return msg.replace("Input should be ", "Please enter ")
elif error_type == "missing":
return "This field is required"
elif error_type == "string_type":
return "Please enter a valid string"
elif error_type == "int_type":
return "Please enter a valid int"
elif error_type == "int_parsing":
return "Please enter a valid int"
elif error_type == "float_type":
return "Please enter a valid float"
elif error_type == "float_parsing":
return "Please enter a valid float"
elif error_type == "value_error":
# Strip "Value error, " prefix from custom validator messages
return msg.replace("Value error, ", "")
else:
# Default: use the message from Pydantic but clean it up
return msg.replace("Input should be ", "Please enter ").replace("Value error, ", "")
def handle_validation_error(e: ValidationError) -> None:
"""Convert a Pydantic ValidationError into a structured HttpError.
This function transforms Pydantic validation errors into a JSON structure
that the frontend expects for displaying field-level errors.
Args:
e: The Pydantic ValidationError to handle.
Raises:
HttpError: Always raises a 400 error with structured JSON containing
type, field_errors, and message fields.
"""
# Transform Pydantic validation errors into user-friendly format
field_errors: dict[str, list[str]] = {}
for error in e.errors():
# Get the field name from location tuple
loc = error.get("loc", ())
field = str(loc[-1]) if loc else "root"
# Format the error message
friendly_msg = format_validation_error(error)
if field not in field_errors:
field_errors[field] = []
field_errors[field].append(friendly_msg)
# Return structured error for frontend parsing
error_response = {
"type": "validation_error",
"field_errors": field_errors,
"message": "Please correct the errors below",
}
raise HttpError(400, json.dumps(error_response))
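The field-grouping step inside `handle_validation_error` works on plain dictionaries, since Pydantic's `e.errors()` yields dicts whose `loc` tuple ends with the field name. A standalone sketch with hand-written error dicts (the messages are illustrative, not real Pydantic output):

```python
# Sketch of the field-grouping step in handle_validation_error:
# bucket each error message under the last element of its "loc" tuple.
errors = [
    {"loc": ("body", "name"), "msg": "Field required", "type": "missing"},
    {"loc": ("body", "temperature"), "msg": "Input should be a valid number", "type": "float_parsing"},
    {"loc": ("body", "name"), "msg": "Input should be a valid string", "type": "string_type"},
]

field_errors: dict[str, list[str]] = {}
for error in errors:
    loc = error.get("loc", ())
    field = str(loc[-1]) if loc else "root"  # fall back to "root" for model-level errors
    field_errors.setdefault(field, []).append(error["msg"])

print(field_errors)
# {'name': ['Field required', 'Input should be a valid string'], 'temperature': ['Input should be a valid number']}
```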


@ -1,8 +1,34 @@
import hashlib
from ninja.security import HttpBearer
from ninja.errors import HttpError
from epdb.models import APIToken


class BearerTokenAuth(HttpBearer):
    def authenticate(self, request, token):
        if token is None:
            return None
        hashed_token = hashlib.sha256(token.encode()).hexdigest()
        user = APIToken.authenticate(hashed_token, hashed=True)
        if not user:
            raise HttpError(401, "Invalid or expired token")
        request.user = user
        return user


class OptionalBearerTokenAuth:
    """Bearer auth that allows unauthenticated access.

    Validates the Bearer token if present (401 on invalid token),
    otherwise lets the request through for anonymous/session access.
    """

    def __init__(self):
        self._bearer = BearerTokenAuth()

    def __call__(self, request):
        return self._bearer(request) or request.user
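The hashing scheme behind `BearerTokenAuth` is plain stdlib `hashlib`: only the SHA-256 digest of the raw bearer token is used for the database lookup, so stored tokens are never usable in plaintext. A minimal sketch (the token value is illustrative):

```python
import hashlib

# Sketch of the token-lookup scheme used by BearerTokenAuth:
# hash the raw bearer token and compare digests, never raw values.
raw_token = "example-raw-token"
hashed_token = hashlib.sha256(raw_token.encode()).hexdigest()

# A hex digest of a 256-bit hash is always 64 characters
assert len(hashed_token) == 64
# The digest is deterministic, so it works as a stable lookup key
assert hashed_token == hashlib.sha256(raw_token.encode()).hexdigest()
```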


@ -1,12 +1,12 @@
from django.db.models import Model
from epdb.logic import PackageManager
from epdb.models import CompoundStructure, User, Package, Compound, Scenario
from uuid import UUID
from .errors import EPAPINotFoundError, EPAPIPermissionDeniedError


def get_compound_for_read(user, compound_uuid: UUID):
    """
    Get compound by UUID with permission check.
    """
@ -23,7 +23,7 @@ def get_compound_or_error(user, compound_uuid: UUID):
    return compound


def get_package_for_read(user, package_uuid: UUID):
    """
    Get package by UUID with permission check.
    """
@ -41,14 +41,58 @@ def get_package_or_error(user, package_uuid: UUID):
    return package


def get_package_for_write(user, package_uuid: UUID):
    """
    Get package by UUID with permission check.
    """
    # FIXME: update package manager with custom exceptions to avoid manual checks here
    try:
        package = Package.objects.get(uuid=package_uuid)
    except Package.DoesNotExist:
        raise EPAPINotFoundError(f"Package with UUID {package_uuid} not found")
    # FIXME: optimize package manager to exclusively work with UUIDs
    if not user or user.is_anonymous or not PackageManager.writable(user, package):
        raise EPAPIPermissionDeniedError("Insufficient permissions to access this package.")
    return package


def get_scenario_for_read(user, scenario_uuid: UUID):
    """Get scenario by UUID with read permission check."""
    try:
        scenario = Scenario.objects.select_related("package").get(uuid=scenario_uuid)
    except Scenario.DoesNotExist:
        raise EPAPINotFoundError(f"Scenario with UUID {scenario_uuid} not found")
    if not user or user.is_anonymous or not PackageManager.readable(user, scenario.package):
        raise EPAPIPermissionDeniedError("Insufficient permissions to access this scenario.")
    return scenario


def get_scenario_for_write(user, scenario_uuid: UUID):
    """Get scenario by UUID with write permission check."""
    try:
        scenario = Scenario.objects.select_related("package").get(uuid=scenario_uuid)
    except Scenario.DoesNotExist:
        raise EPAPINotFoundError(f"Scenario with UUID {scenario_uuid} not found")
    if not user or user.is_anonymous or not PackageManager.writable(user, scenario.package):
        raise EPAPIPermissionDeniedError("Insufficient permissions to modify this scenario.")
    return scenario


def get_user_packages_for_read(user: User | None):
    """Get all packages readable by the user."""
    if not user or user.is_anonymous:
        return PackageManager.get_reviewed_packages()
    return PackageManager.get_all_readable_packages(user, include_reviewed=True)


def get_user_entities_for_read(model_class: Model, user: User | None):
    """Build queryset for reviewed package entities."""
    if not user or user.is_anonymous:
@ -60,16 +104,14 @@ def get_user_entities_qs(model_class: Model, user: User | None):
    return qs


def get_package_entities_for_read(model_class: Model, package_uuid: UUID, user: User | None = None):
    """Build queryset for specific package entities."""
    package = get_package_for_read(user, package_uuid)
    qs = model_class.objects.filter(package=package).select_related("package")
    return qs


def get_user_structure_for_read(user: User | None):
    """Build queryset for structures accessible to the user (via compound->package)."""
    if not user or user.is_anonymous:
@ -83,13 +125,13 @@ def get_user_structures_qs(user: User | None):
    return qs


def get_package_compound_structure_for_read(
    package_uuid: UUID, compound_uuid: UUID, user: User | None = None
):
    """Build queryset for specific package compound structures."""
    get_package_for_read(user, package_uuid)
    compound = get_compound_for_read(user, compound_uuid)
    qs = CompoundStructure.objects.filter(compound=compound).select_related("compound__package")
    return qs


@ -0,0 +1,174 @@
from ninja import Router, Body
from ninja.errors import HttpError
from uuid import UUID
from pydantic import ValidationError
from typing import Dict, Any
import logging
from envipy_additional_information import registry
from envipy_additional_information.groups import GroupEnum
from epapi.utils.schema_transformers import build_rjsf_output
from epapi.utils.validation_errors import handle_validation_error
from epdb.models import AdditionalInformation
from ..dal import get_scenario_for_read, get_scenario_for_write
logger = logging.getLogger(__name__)
router = Router(tags=["Additional Information"])
@router.get("/information/schema/")
def list_all_schemas(request):
"""Return all schemas in RJSF format with lowercase class names as keys."""
result = {}
for name, cls in registry.list_models().items():
try:
result[name] = build_rjsf_output(cls)
except Exception as e:
logger.warning(f"Failed to generate schema for {name}: {e}")
continue
return result
@router.get("/information/schema/{model_name}/")
def get_model_schema(request, model_name: str):
"""Return RJSF schema for specific model."""
cls = registry.get_model(model_name.lower())
if not cls:
raise HttpError(404, f"Unknown model: {model_name}")
return build_rjsf_output(cls)
@router.get("/scenario/{uuid:scenario_uuid}/information/")
def list_scenario_info(request, scenario_uuid: UUID):
"""List all additional information for a scenario"""
scenario = get_scenario_for_read(request.user, scenario_uuid)
result = []
for ai in AdditionalInformation.objects.filter(scenario=scenario):
result.append(
{
"type": ai.get().__class__.__name__,
"uuid": getattr(ai, "uuid", None),
"data": ai.data,
"attach_object": ai.content_object.simple_json() if ai.content_object else None,
}
)
return result
@router.post("/scenario/{uuid:scenario_uuid}/information/{model_name}/")
def add_scenario_info(
request, scenario_uuid: UUID, model_name: str, payload: Dict[str, Any] = Body(...)
):
"""Add new additional information to scenario"""
cls = registry.get_model(model_name.lower())
if not cls:
raise HttpError(404, f"Unknown model: {model_name}")
try:
instance = cls(**payload) # Pydantic validates
except ValidationError as e:
handle_validation_error(e)
scenario = get_scenario_for_write(request.user, scenario_uuid)
# Model method now returns the UUID
created_uuid = scenario.add_additional_information(instance)
return {"status": "created", "uuid": created_uuid}
@router.patch("/scenario/{uuid:scenario_uuid}/information/item/{uuid:ai_uuid}/")
def update_scenario_info(
request, scenario_uuid: UUID, ai_uuid: UUID, payload: Dict[str, Any] = Body(...)
):
"""Update existing additional information for a scenario"""
scenario = get_scenario_for_write(request.user, scenario_uuid)
ai_uuid_str = str(ai_uuid)
ai = AdditionalInformation.objects.filter(uuid=ai_uuid_str, scenario=scenario)
if not ai.exists():
raise HttpError(404, f"Additional information with UUID {ai_uuid} not found")
ai = ai.first()
# Get the model class for validation
cls = registry.get_model(ai.type.lower())
if not cls:
raise HttpError(500, f"Unknown model type in data: {ai.type}")
# Validate the payload against the model
try:
instance = cls(**payload)
except ValidationError as e:
handle_validation_error(e)
# Use model method for update
try:
scenario.update_additional_information(ai_uuid_str, instance)
except ValueError as e:
raise HttpError(404, str(e))
return {"status": "updated", "uuid": ai_uuid_str}
@router.delete("/scenario/{uuid:scenario_uuid}/information/item/{uuid:ai_uuid}/")
def delete_scenario_info(request, scenario_uuid: UUID, ai_uuid: UUID):
"""Delete additional information from scenario"""
scenario = get_scenario_for_write(request.user, scenario_uuid)
try:
scenario.remove_additional_information(str(ai_uuid))
except ValueError as e:
raise HttpError(404, str(e))
return {"status": "deleted"}
@router.get("/information/groups/")
def list_groups(request):
"""Return list of available group names."""
return {"groups": GroupEnum.values()}
@router.get("/information/groups/{group_name}/")
def get_group_models(request, group_name: str):
"""
Return models for a specific group organized by subcategory.
Args:
group_name: One of "sludge", "soil", or "sediment" (string)
Returns:
Dictionary with subcategories (exp, spike, comp, misc, or group name)
as keys and lists of model info as values
"""
# Convert string to enum (raises ValueError if invalid)
try:
group_enum = GroupEnum(group_name)
except ValueError:
valid = ", ".join(GroupEnum.values())
raise HttpError(400, f"Invalid group '{group_name}'. Valid: {valid}")
try:
group_data = registry.collect_group(group_enum)
except (ValueError, TypeError) as e:
raise HttpError(400, str(e))
result = {}
for subcategory, models in group_data.items():
result[subcategory] = [
{
"name": cls.__name__.lower(),
"class": cls.__name__,
"title": getattr(cls.UI, "title", cls.__name__)
if hasattr(cls, "UI")
else cls.__name__,
}
for cls in models
]
return result
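The group-name validation in `get_group_models` leans on the standard enum behavior: constructing the enum from a raw string raises `ValueError` for unknown values, which the endpoint turns into a 400. A self-contained sketch, with member values assumed from the docstring ("sludge", "soil", "sediment") rather than taken from the real `GroupEnum`:

```python
from enum import Enum

# Sketch of the validation in get_group_models. Member values are
# assumptions based on the endpoint docstring, not the real GroupEnum.
class GroupEnum(str, Enum):
    SLUDGE = "sludge"
    SOIL = "soil"
    SEDIMENT = "sediment"

def validate_group(name: str) -> GroupEnum:
    # Enum construction raises ValueError for unknown values
    try:
        return GroupEnum(name)
    except ValueError:
        valid = ", ".join(m.value for m in GroupEnum)
        raise ValueError(f"Invalid group '{name}'. Valid: {valid}")

assert validate_group("soil") is GroupEnum.SOIL
```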


@ -6,7 +6,7 @@ from uuid import UUID
from epdb.models import Compound
from ..pagination import EnhancedPageNumberPagination
from ..schemas import CompoundOutSchema, ReviewStatusFilter
from ..dal import get_user_entities_for_read, get_package_entities_for_read

router = Router()
@ -21,7 +21,7 @@ def list_all_compounds(request):
    """
    List all compounds from reviewed packages.
    """
    return get_user_entities_for_read(Compound, request.user).order_by("name").all()
@router.get(
@ -38,4 +38,4 @@ def list_package_compounds(request, package_uuid: UUID):
    List all compounds for a specific package.
    """
    user = request.user
    return get_package_entities_for_read(Compound, package_uuid, user).order_by("name").all()


@ -6,7 +6,7 @@ from uuid import UUID
from epdb.models import EPModel
from ..pagination import EnhancedPageNumberPagination
from ..schemas import ModelOutSchema, ReviewStatusFilter
from ..dal import get_user_entities_for_read, get_package_entities_for_read

router = Router()
@ -21,7 +21,7 @@ def list_all_models(request):
    """
    List all models from reviewed packages.
    """
    return get_user_entities_for_read(EPModel, request.user).order_by("name").all()
@router.get(
@ -38,4 +38,4 @@ def list_package_models(request, package_uuid: UUID):
    List all models for a specific package.
    """
    user = request.user
    return get_package_entities_for_read(EPModel, package_uuid, user).order_by("name").all()


@ -3,7 +3,8 @@ from ninja import Router
from ninja_extra.pagination import paginate
import logging
from ..auth import OptionalBearerTokenAuth
from ..dal import get_user_packages_for_read
from ..pagination import EnhancedPageNumberPagination
from ..schemas import PackageOutSchema, SelfReviewStatusFilter
@ -11,7 +12,11 @@ router = Router()
logger = logging.getLogger(__name__)
@router.get(
    "/packages/",
    response=EnhancedPageNumberPagination.Output[PackageOutSchema],
    auth=OptionalBearerTokenAuth(),
)
@paginate(
    EnhancedPageNumberPagination,
    page_size=s.API_PAGINATION_DEFAULT_PAGE_SIZE,
@ -23,5 +28,5 @@ def list_all_packages(request):
    """
    user = request.user
    qs = get_user_packages_for_read(user)
    return qs.order_by("name").all()


@ -6,7 +6,7 @@ from uuid import UUID
from epdb.models import Pathway
from ..pagination import EnhancedPageNumberPagination
from ..schemas import PathwayOutSchema, ReviewStatusFilter
from ..dal import get_user_entities_for_read, get_package_entities_for_read

router = Router()
@ -22,7 +22,7 @@ def list_all_pathways(request):
    List all pathways from reviewed packages.
    """
    user = request.user
    return get_user_entities_for_read(Pathway, user).order_by("name").all()
@router.get(
@ -39,4 +39,4 @@ def list_package_pathways(request, package_uuid: UUID):
    List all pathways for a specific package.
    """
    user = request.user
    return get_package_entities_for_read(Pathway, package_uuid, user).order_by("name").all()


@ -6,7 +6,7 @@ from uuid import UUID
from epdb.models import Reaction
from ..pagination import EnhancedPageNumberPagination
from ..schemas import ReactionOutSchema, ReviewStatusFilter
from ..dal import get_user_entities_for_read, get_package_entities_for_read

router = Router()
@ -22,7 +22,7 @@ def list_all_reactions(request):
    List all reactions from reviewed packages.
    """
    user = request.user
    return get_user_entities_for_read(Reaction, user).order_by("name").all()
@router.get(
@ -39,4 +39,4 @@ def list_package_reactions(request, package_uuid: UUID):
    List all reactions for a specific package.
    """
    user = request.user
    return get_package_entities_for_read(Reaction, package_uuid, user).order_by("name").all()


@ -6,7 +6,7 @@ from uuid import UUID
from epdb.models import Rule
from ..pagination import EnhancedPageNumberPagination
from ..schemas import ReviewStatusFilter, RuleOutSchema
from ..dal import get_user_entities_for_read, get_package_entities_for_read

router = Router()
@ -22,7 +22,7 @@ def list_all_rules(request):
    List all rules from reviewed packages.
    """
    user = request.user
    return get_user_entities_for_read(Rule, user).order_by("name").all()
@router.get(
@ -39,4 +39,4 @@ def list_package_rules(request, package_uuid: UUID):
    List all rules for a specific package.
    """
    user = request.user
    return get_package_entities_for_read(Rule, package_uuid, user).order_by("name").all()


@@ -1,12 +1,25 @@
from django.conf import settings as s
from django.db import IntegrityError, OperationalError, DatabaseError
from ninja import Router, Body
from ninja.errors import HttpError
from ninja_extra.pagination import paginate
from uuid import UUID
from pydantic import ValidationError
import logging
import json

from epdb.models import Scenario
from epdb.views import _anonymous_or_real

from ..pagination import EnhancedPageNumberPagination
from ..schemas import (
    ReviewStatusFilter,
    ScenarioOutSchema,
    ScenarioCreateSchema,
)
from ..dal import get_user_entities_for_read, get_package_entities_for_read, get_package_for_write
from envipy_additional_information import registry

logger = logging.getLogger(__name__)

router = Router()

@@ -19,7 +32,8 @@ router = Router()
)
def list_all_scenarios(request):
    user = request.user
    items = get_user_entities_for_read(Scenario, user)
    return items.order_by("name").all()

@router.get(
@@ -33,4 +47,83 @@ def list_all_scenarios(request):
)
def list_package_scenarios(request, package_uuid: UUID):
    user = request.user
    items = get_package_entities_for_read(Scenario, package_uuid, user)
    return items.order_by("name").all()
@router.post("/package/{uuid:package_uuid}/scenario/", response=ScenarioOutSchema)
def create_scenario(request, package_uuid: UUID, payload: ScenarioCreateSchema = Body(...)):
    """Create a new scenario with optional additional information."""
    user = _anonymous_or_real(request)

    try:
        current_package = get_package_for_write(user, package_uuid)
    except ValueError as e:
        error_msg = str(e)
        if "does not exist" in error_msg:
            raise HttpError(404, f"Package not found: {package_uuid}")
        elif "Insufficient permissions" in error_msg:
            raise HttpError(403, "You do not have permission to access this package")
        else:
            logger.error(f"Unexpected ValueError from get_package_by_id: {error_msg}")
            raise HttpError(400, "Invalid package request")

    # Build additional information models from payload
    additional_information_models = []
    validation_errors = []

    for ai_item in payload.additional_information:
        # Get model class from registry
        model_cls = registry.get_model(ai_item.type.lower())
        if not model_cls:
            validation_errors.append(f"Unknown additional information type: {ai_item.type}")
            continue
        try:
            # Validate and create model instance
            instance = model_cls(**ai_item.data)
            additional_information_models.append(instance)
        except ValidationError as e:
            # Collect validation errors to return to user
            error_messages = [err.get("msg", "Validation error") for err in e.errors()]
            validation_errors.append(f"{ai_item.type}: {', '.join(error_messages)}")
        except (TypeError, AttributeError, KeyError) as e:
            logger.warning(f"Failed to instantiate {ai_item.type} model: {str(e)}")
            validation_errors.append(f"{ai_item.type}: Invalid data structure - {str(e)}")
        except Exception as e:
            logger.error(f"Unexpected error instantiating {ai_item.type}: {str(e)}")
            validation_errors.append(f"{ai_item.type}: Failed to process - please check your data")

    # If there are validation errors, return them
    if validation_errors:
        raise HttpError(
            400,
            json.dumps(
                {
                    "error": "Validation errors in additional information",
                    "details": validation_errors,
                }
            ),
        )

    # Create scenario using the existing Scenario.create method
    try:
        new_scenario = Scenario.create(
            package=current_package,
            name=payload.name,
            description=payload.description,
            scenario_date=payload.scenario_date,
            scenario_type=payload.scenario_type,
            additional_information=additional_information_models,
        )
    except IntegrityError as e:
        logger.error(f"Database integrity error creating scenario: {str(e)}")
        raise HttpError(400, "Scenario creation failed - data constraint violation")
    except OperationalError as e:
        logger.error(f"Database operational error creating scenario: {str(e)}")
        raise HttpError(503, "Database temporarily unavailable - please try again")
    except (DatabaseError, AttributeError) as e:
        logger.error(f"Error creating scenario: {str(e)}")
        raise HttpError(500, "Failed to create scenario due to database error")

    return new_scenario
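For reference, a request body for this endpoint could look like the sketch below. The field names follow `ScenarioCreateSchema`; the `halflife` type and its data keys are illustrative placeholders, since the accepted types depend on what the `envipy_additional_information` registry actually exposes:

```python
import json

# Hypothetical POST body for /package/{package_uuid}/scenario/ --
# "halflife" and its data keys are placeholders, not guaranteed
# registry entries.
payload = {
    "name": "Field study 2024",
    "description": "Dissipation study on loamy soil",
    "scenario_date": "2024-06-01",
    "scenario_type": "Soil",
    "additional_information": [
        {"type": "halflife", "data": {"dt50": "23.5", "model": "SFO"}},
    ],
}

body = json.dumps(payload)
print(json.loads(body)["scenario_type"])  # -> Soil
```

An unknown `type` or data that fails the registry model's validation comes back as a 400 with the collected error details, rather than failing on the first bad item.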


@@ -0,0 +1,23 @@
from django.conf import settings as s
from ninja import Router
from ninja_extra.pagination import paginate

from epdb.logic import SettingManager

from ..pagination import EnhancedPageNumberPagination
from ..schemas import SettingOutSchema

router = Router()

@router.get("/settings/", response=EnhancedPageNumberPagination.Output[SettingOutSchema])
@paginate(
    EnhancedPageNumberPagination,
    page_size=s.API_PAGINATION_DEFAULT_PAGE_SIZE,
)
def list_all_settings(request):
    """
    List all settings visible to the requesting user.
    """
    user = request.user
    return SettingManager.get_all_settings(user)


@@ -6,8 +6,8 @@ from uuid import UUID
from ..pagination import EnhancedPageNumberPagination
from ..schemas import CompoundStructureOutSchema, StructureReviewStatusFilter
from ..dal import (
    get_user_structure_for_read,
    get_package_compound_structure_for_read,
)

router = Router()

@@ -26,7 +26,7 @@ def list_all_structures(request):
    List all structures from all packages.
    """
    user = request.user
    return get_user_structure_for_read(user).order_by("name").all()

@router.get(
@@ -44,7 +44,7 @@ def list_package_structures(request, package_uuid: UUID, compound_uuid: UUID):
    """
    user = request.user
    return (
        get_package_compound_structure_for_read(package_uuid, compound_uuid, user)
        .order_by("name")
        .all()
    )


@@ -1,7 +1,19 @@
from ninja import Router
from ninja.security import SessionAuth

from .auth import BearerTokenAuth
from .endpoints import (
    packages,
    scenarios,
    compounds,
    rules,
    reactions,
    pathways,
    models,
    structure,
    additional_information,
    settings,
)

# Main router with authentication
router = Router(

@@ -20,3 +32,5 @@ router.add_router("", reactions.router)
router.add_router("", pathways.router)
router.add_router("", models.router)
router.add_router("", structure.router)
router.add_router("", additional_information.router)
router.add_router("", settings.router)


@@ -1,5 +1,5 @@
from ninja import FilterSchema, FilterLookup, Schema
from typing import Annotated, Optional, List, Dict, Any
from uuid import UUID

@@ -51,6 +51,23 @@ class ScenarioOutSchema(PackageEntityOutSchema):
    pass

class AdditionalInformationItemSchema(Schema):
    """Schema for additional information item in scenario creation."""

    type: str
    data: Dict[str, Any]

class ScenarioCreateSchema(Schema):
    """Schema for creating a new scenario."""

    name: str
    description: str = ""
    scenario_date: str = "No date"
    scenario_type: str = "Not specified"
    additional_information: List[AdditionalInformationItemSchema] = []

class CompoundOutSchema(PackageEntityOutSchema):
    pass

@@ -102,3 +119,10 @@ class PackageOutSchema(Schema):
    @staticmethod
    def resolve_review_status(obj):
        return "reviewed" if obj.reviewed else "unreviewed"

class SettingOutSchema(Schema):
    uuid: UUID
    url: str = ""
    name: str
    description: str
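Since ninja's `Schema` builds on pydantic, the optional fields of `ScenarioCreateSchema` above simply fall back to their declared defaults when omitted from a request. A stdlib dataclass sketch of the same shape (not the real ninja class) illustrates the behavior:

```python
from dataclasses import dataclass, field
from typing import Any, Dict, List

@dataclass
class AdditionalInformationItem:
    type: str
    data: Dict[str, Any]

@dataclass
class ScenarioCreate:
    # Mirrors ScenarioCreateSchema's defaults; only "name" is required.
    name: str
    description: str = ""
    scenario_date: str = "No date"
    scenario_type: str = "Not specified"
    additional_information: List[AdditionalInformationItem] = field(default_factory=list)

s = ScenarioCreate(name="Study A")
print(s.scenario_date)  # -> No date
```

So a client posting only `{"name": "Study A"}` gets a scenario with the "No date" / "Not specified" placeholders and no additional information attached.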


@@ -2,6 +2,7 @@ from django.conf import settings as s
from django.contrib import admin

from .models import (
    AdditionalInformation,
    Compound,
    CompoundStructure,
    Edge,

@@ -16,6 +17,7 @@ from .models import (
    Node,
    ParallelRule,
    Pathway,
    PropertyPluginModel,
    Reaction,
    Scenario,
    Setting,

@@ -27,8 +29,20 @@ from .models import (
Package = s.GET_PACKAGE_MODEL()

class AdditionalInformationAdmin(admin.ModelAdmin):
    pass

class UserAdmin(admin.ModelAdmin):
    list_display = [
        "username",
        "email",
        "is_active",
        "is_staff",
        "is_superuser",
        "last_login",
        "date_joined",
    ]

class UserPackagePermissionAdmin(admin.ModelAdmin):

@@ -48,7 +62,7 @@ class JobLogAdmin(admin.ModelAdmin):
class EPAdmin(admin.ModelAdmin):
    search_fields = ["name", "description", "url", "uuid"]
    list_display = ["name", "url", "created"]
    ordering = ["-created"]

@@ -65,6 +79,10 @@ class EnviFormerAdmin(EPAdmin):
    pass

class PropertyPluginModelAdmin(admin.ModelAdmin):
    pass

class LicenseAdmin(admin.ModelAdmin):
    list_display = ["cc_string", "link", "image_link"]

@@ -117,6 +135,7 @@ class ExternalIdentifierAdmin(admin.ModelAdmin):
    pass

admin.site.register(AdditionalInformation, AdditionalInformationAdmin)
admin.site.register(User, UserAdmin)
admin.site.register(UserPackagePermission, UserPackagePermissionAdmin)
admin.site.register(Group, GroupAdmin)

@@ -125,6 +144,7 @@ admin.site.register(JobLog, JobLogAdmin)
admin.site.register(Package, PackageAdmin)
admin.site.register(MLRelativeReasoning, MLRelativeReasoningAdmin)
admin.site.register(EnviFormer, EnviFormerAdmin)
admin.site.register(PropertyPluginModel, PropertyPluginModelAdmin)
admin.site.register(License, LicenseAdmin)
admin.site.register(Compound, CompoundAdmin)
admin.site.register(CompoundStructure, CompoundStructureAdmin)


@@ -2,20 +2,12 @@ from typing import List
from django.contrib.auth import get_user_model
from ninja import Router, Schema, Field
from ninja.pagination import paginate

from epapi.v1.auth import BearerTokenAuth

from .logic import PackageManager
from .models import User, Compound

def _anonymous_or_real(request):


@@ -15,3 +15,13 @@ class EPDBConfig(AppConfig):
        model_name = getattr(settings, "EPDB_PACKAGE_MODEL", "epdb.Package")
        logger.info(f"Using Package model: {model_name}")

        from .autodiscovery import autodiscover

        autodiscover()

        if settings.PLUGINS_ENABLED:
            from bridge.contracts import Property
            from utilities.plugin import discover_plugins

            settings.PROPERTY_PLUGINS.update(**discover_plugins(_cls=Property))

epdb/autodiscovery.py

@@ -0,0 +1,5 @@
from django.utils.module_loading import autodiscover_modules

def autodiscover():
    autodiscover_modules("epdb_hooks")


@@ -1,36 +1,58 @@
from collections import defaultdict
from typing import Any, Dict, List, Optional

import nh3
from django.conf import settings as s
from django.contrib.auth import get_user_model
from django.http import HttpResponse, JsonResponse
from django.shortcuts import redirect
from ninja import Field, Form, Query, Router, Schema
from ninja.security import SessionAuth

from utilities.chem import FormatConverter
from utilities.misc import PackageExporter

from .logic import (
    EPDBURLParser,
    GroupManager,
    PackageManager,
    SearchManager,
    SettingManager,
    UserManager,
)
from .models import (
    AdditionalInformation,
    Compound,
    CompoundStructure,
    Edge,
    EnviFormer,
    EPModel,
    Group,
    GroupPackagePermission,
    MLRelativeReasoning,
    Node,
    PackageBasedModel,
    ParallelRule,
    Pathway,
    Reaction,
    Rule,
    RuleBasedRelativeReasoning,
    Scenario,
    SimpleAmbitRule,
    User,
    UserPackagePermission,
)

Package = s.GET_PACKAGE_MODEL()

def get_package_for_write(user, package_uuid):
    p = PackageManager.get_package_by_id(user, package_uuid)

    if not PackageManager.writable(user, p):
        raise ValueError("You do not have the rights to write to this Package!")

    return p
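This helper centralizes the resolve-then-gate check that the write endpoints below previously repeated inline with `PackageManager.writable`. The pattern can be sketched with stand-in objects (these stubs are illustrative, not the real `PackageManager`):

```python
# Stand-ins for PackageManager.get_package_by_id / PackageManager.writable.
PACKAGES = {"p1": {"uuid": "p1", "owner": "alice"}}

def get_package_by_id(user, package_uuid):
    return PACKAGES[package_uuid]

def writable(user, package):
    return user == package["owner"]

def get_package_for_write(user, package_uuid):
    # Resolve first, then gate: callers only need to catch one
    # exception type and map it to a 403 response.
    p = get_package_by_id(user, package_uuid)
    if not writable(user, p):
        raise ValueError("You do not have the rights to write to this Package!")
    return p

print(get_package_for_write("alice", "p1")["uuid"])  # -> p1
```

Collapsing the permission check into the lookup is what lets the delete/create handlers below drop their nested `if PackageManager.writable(...)` branches.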
def _anonymous_or_real(request):
    if request.user.is_authenticated and not request.user.is_anonymous:
        return request.user

@@ -81,6 +103,8 @@ class SimpleObject(Schema):
            return "reviewed" if obj.compound.package.reviewed else "unreviewed"
        elif isinstance(obj, Node) or isinstance(obj, Edge):
            return "reviewed" if obj.pathway.package.reviewed else "unreviewed"
        elif isinstance(obj, dict) and "review_status" in obj:
            return "reviewed" if obj.get("review_status") else "unreviewed"
        else:
            raise ValueError("Object has no package")
@@ -200,6 +224,82 @@ def get_user(request, user_uuid):
    }

#########
# Group #
#########

class GroupMember(Schema):
    id: str
    identifier: str
    name: str

class GroupWrapper(Schema):
    group: List[SimpleGroup]

class GroupSchema(Schema):
    description: str
    id: str = Field(None, alias="url")
    identifier: str = "group"
    members: List[GroupMember] = Field([], alias="members")
    name: str = Field(None, alias="name")
    ownerid: str = Field(None, alias="owner.url")
    ownername: str = Field(None, alias="owner.get_name")
    packages: List["SimplePackage"] = Field([], alias="packages")
    readers: List[GroupMember] = Field([], alias="readers")
    writers: List[GroupMember] = Field([], alias="writers")

    @staticmethod
    def resolve_members(obj: Group):
        res = []

        for member in obj.user_member.all():
            res.append(GroupMember(id=member.url, identifier="usermember", name=member.get_name()))

        for member in obj.group_member.all():
            res.append(GroupMember(id=member.url, identifier="groupmember", name=member.get_name()))

        return res

    @staticmethod
    def resolve_packages(obj: Group):
        return Package.objects.filter(
            id__in=[
                GroupPackagePermission.objects.filter(group=obj).values_list(
                    "package_id", flat=True
                )
            ]
        )

    @staticmethod
    def resolve_readers(obj: Group):
        return GroupSchema.resolve_members(obj)

    @staticmethod
    def resolve_writers(obj: Group):
        return [GroupMember(id=obj.owner.url, identifier="usermember", name=obj.owner.username)]

@router.get("/group", response={200: GroupWrapper, 403: Error})
def get_groups(request):
    return {"group": GroupManager.get_groups(request.user)}

@router.get("/group/{uuid:group_uuid}", response={200: GroupSchema, 403: Error})
def get_group(request, group_uuid):
    try:
        g = GroupManager.get_group_by_id(request.user, group_uuid)
        return g
    except ValueError:
        return 403, {
            "message": f"Getting Group with id {group_uuid} failed due to insufficient rights!"
        }
##########
# Search #
##########

class Search(Schema):
    packages: List[str] = Field(alias="packages[]")
    search: str

@@ -237,11 +337,11 @@ def search(request, search: Query[Search]):
    if "Compound Structures" in search_res:
        res["structure"] = search_res["Compound Structures"]

    if "Reactions" in search_res:
        res["reaction"] = search_res["Reactions"]

    if "Pathways" in search_res:
        res["pathway"] = search_res["Pathways"]

    if "Rules" in search_res:
        res["rule"] = search_res["Rules"]
@@ -292,7 +392,7 @@ class PackageSchema(Schema):
            ).values_list("user", flat=True)
        ).distinct()

        return [{u.id: u.get_name()} for u in users]

    @staticmethod
    def resolve_writers(obj: Package):

@@ -302,7 +402,7 @@ class PackageSchema(Schema):
            ).values_list("user", flat=True)
        ).distinct()

        return [{u.id: u.get_name()} for u in users]

    @staticmethod
    def resolve_review_comment(obj):

@@ -373,7 +473,7 @@ class UpdatePackage(Schema):
@router.post("/package/{uuid:package_uuid}", response={200: PackageSchema | Any, 400: Error})
def update_package(request, package_uuid, pack: Form[UpdatePackage]):
    try:
        p = get_package_for_write(request.user, package_uuid)

        if pack.hiddenMethod:
            if pack.hiddenMethod == "DELETE":
@@ -469,21 +569,42 @@ class CompoundSchema(Schema):
    @staticmethod
    def resolve_halflifes(obj: Compound):
        res = []

        for scen, hls in obj.half_lifes().items():
            for hl in hls:
                res.append(
                    {
                        "hl": str(hl.dt50),
                        "hlComment": hl.comment,
                        "hlFit": hl.fit,
                        "hlModel": hl.model,
                        "scenarioId": scen.url,
                        "scenarioName": scen.name,
                        "scenarioType": scen.scenario_type,
                        "source": hl.source,
                    }
                )

        return res

    @staticmethod
    def resolve_pubchem_compound_references(obj: Compound):
        return []

    @staticmethod
    def resolve_pathway_scenarios(obj: Compound):
        res = []

        for pw in obj.related_pathways:
            for scen in pw.scenarios.all():
                res.append(
                    {
                        "scenarioId": scen.url,
                        "scenarioName": scen.name,
                        "scenarioType": scen.scenario_type,
                    }
                )

        return res
class CompoundStructureSchema(Schema):

@@ -536,7 +657,22 @@ class CompoundStructureSchema(Schema):
    @staticmethod
    def resolve_halflifes(obj: CompoundStructure):
        res = []

        for scen, hls in obj.half_lifes().items():
            for hl in hls:
                res.append(
                    {
                        "hl": str(hl.dt50),
                        "hlComment": hl.comment,
                        "hlFit": hl.fit,
                        "hlModel": hl.model,
                        "scenarioId": scen.url,
                        "scenarioName": scen.name,
                        "scenarioType": scen.scenario_type,
                        "source": hl.source,
                    }
                )

        return res

    @staticmethod
    def resolve_pubchem_compound_references(obj: CompoundStructure):

@@ -544,13 +680,18 @@ class CompoundStructureSchema(Schema):
    @staticmethod
    def resolve_pathway_scenarios(obj: CompoundStructure):
        res = []

        for pw in obj.related_pathways:
            for scen in pw.scenarios.all():
                res.append(
                    {
                        "scenarioId": scen.url,
                        "scenarioName": scen.name,
                        "scenarioType": scen.scenario_type,
                    }
                )

        return res
class CompoundStructureWrapper(Schema):

@@ -635,7 +776,7 @@ def create_package_compound(
    c: Form[CreateCompound],
):
    try:
        p = get_package_for_write(request.user, package_uuid)

        # inchi is not used atm
        c = Compound.create(
            p, c.compoundSmiles, c.compoundName, c.compoundDescription, inchi=c.inchi
@@ -648,14 +789,10 @@ def create_package_compound(
@router.delete("/package/{uuid:package_uuid}/compound/{uuid:compound_uuid}")
def delete_compound(request, package_uuid, compound_uuid):
    try:
        p = get_package_for_write(request.user, package_uuid)
        c = Compound.objects.get(package=p, uuid=compound_uuid)
        c.delete()

        return redirect(f"{p.url}/compound")
    except ValueError:
        return 403, {
            "message": f"Deleting Compound with id {compound_uuid} failed due to insufficient rights!"
@@ -667,31 +804,29 @@ def delete_compound(request, package_uuid, compound_uuid):
)
def delete_compound_structure(request, package_uuid, compound_uuid, structure_uuid):
    try:
        p = get_package_for_write(request.user, package_uuid)
        c = Compound.objects.get(package=p, uuid=compound_uuid)
        cs = CompoundStructure.objects.get(compound=c, uuid=structure_uuid)

        # Check if we have to delete the compound as no structure is left
        if len(cs.compound.structures.all()) == 1:
            # This will delete the structure as well
            c.delete()
            return redirect(p.url + "/compound")
        else:
            if cs.normalized_structure:
                c.delete()
                return redirect(p.url + "/compound")
            else:
                if c.default_structure == cs:
                    cs.delete()
                    c.default_structure = c.structures.all().first()
                    return redirect(c.url + "/structure")
                else:
                    cs.delete()
                    return redirect(c.url + "/structure")
    except ValueError:
        return 403, {
            "message": f"Deleting CompoundStructure with id {compound_uuid} failed due to insufficient rights!"
@@ -878,13 +1013,18 @@ def create_package_simple_rule(
    r: Form[CreateSimpleRule],
):
    try:
        p = get_package_for_write(request.user, package_uuid)

        if r.rdkitrule and r.rdkitrule.strip() == "true":
            raise ValueError("Not yet implemented!")
        else:
            sr = SimpleAmbitRule.create(
                p,
                r.name,
                r.description,
                r.smirks,
                r.reactantFilterSmarts,
                r.productFilterSmarts,
            )

        return redirect(sr.url)
@@ -909,7 +1049,7 @@ def create_package_parallel_rule(
    r: Form[CreateParallelRule],
):
    try:
        p = get_package_for_write(request.user, package_uuid)

        srs = SimpleRule.objects.filter(package=p, url__in=r.simpleRules)

@@ -953,7 +1093,7 @@ def post_package_parallel_rule(request, package_uuid, rule_uuid, compound: Form[
def _post_package_rule(request, package_uuid, rule_uuid, compound: Form[str]):
    try:
        p = get_package_for_write(request.user, package_uuid)
        r = Rule.objects.get(package=p, uuid=rule_uuid)

        if compound is not None:

@@ -998,14 +1138,11 @@ def delete_parallel_rule(request, package_uuid, rule_uuid):
def _delete_rule(request, package_uuid, rule_uuid):
    try:
        p = get_package_for_write(request.user, package_uuid)
        r = Rule.objects.get(package=p, uuid=rule_uuid)
        r.delete()

        return redirect(f"{p.url}/rule")
    except ValueError:
        return 403, {
            "message": f"Deleting Rule with id {rule_uuid} failed due to insufficient rights!"
@@ -1037,7 +1174,7 @@ class ReactionSchema(Schema):
    name: str = Field(None, alias="name")
    pathways: List["SimplePathway"] = Field([], alias="related_pathways")
    products: List["ReactionCompoundStructure"] = Field([], alias="products")
    references: Dict[str, List[str]] = Field({}, alias="references")
    reviewStatus: str = Field(None, alias="review_status")
    scenarios: List["SimpleScenario"] = Field([], alias="scenarios")
    smirks: str = Field("", alias="smirks")

@@ -1053,8 +1190,12 @@ class ReactionSchema(Schema):
    @staticmethod
    def resolve_references(obj: Reaction):
        rhea_refs = []

        for rhea in obj.get_rhea_identifiers():
            rhea_refs.append(f"{rhea.identifier_value}")

        # TODO UniProt
        return {"rheaReferences": rhea_refs, "uniprotCount": []}

    @staticmethod
    def resolve_medline_references(obj: Reaction):
@@ -1116,7 +1257,7 @@ def create_package_reaction(
    r: Form[CreateReaction],
):
    try:
        p = get_package_for_write(request.user, package_uuid)

        if r.smirks is None and (r.educt is None or r.product is None):
            raise ValueError("Either SMIRKS or educt/product must be provided")

@@ -1162,14 +1303,11 @@ def create_package_reaction(
@router.delete("/package/{uuid:package_uuid}/reaction/{uuid:reaction_uuid}")
def delete_reaction(request, package_uuid, reaction_uuid):
    try:
        p = get_package_for_write(request.user, package_uuid)
        r = Reaction.objects.get(package=p, uuid=reaction_uuid)
        r.delete()

        return redirect(f"{p.url}/reaction")
    except ValueError:
        return 403, {
            "message": f"Deleting Reaction with id {reaction_uuid} failed due to insufficient rights!"
@@ -1200,7 +1338,14 @@ class ScenarioSchema(Schema):
    @staticmethod
    def resolve_collection(obj: Scenario):
        res = defaultdict(list)

        for ai in obj.get_additional_information(direct_only=False):
            data = ai.data
            data["related"] = ai.content_object.simple_json() if ai.content_object else None
            res[ai.type].append(data)

        return res

    @staticmethod
    def resolve_review_status(obj: Rule):
@@ -1241,17 +1386,57 @@ def get_package_scenario(request, package_uuid, scenario_uuid):
     }


+@router.post("/package/{uuid:package_uuid}/scenario", response={200: str | Any, 403: Error})
+def create_package_scenario(request, package_uuid):
+    from utilities.legacy import build_additional_information_from_request
+
+    try:
+        p = get_package_for_write(request.user, package_uuid)
+
+        scen_date = None
+        date_year = request.POST.get("dateYear")
+        date_month = request.POST.get("dateMonth")
+        date_day = request.POST.get("dateDay")
+
+        if date_year:
+            scen_date = date_year
+        if date_month:
+            scen_date += f"-{date_month}"
+        if date_day:
+            scen_date += f"-{date_day}"
+
+        name = request.POST.get("studyname")
+        description = request.POST.get("studydescription")
+        study_type = request.POST.get("type")
+
+        ais = []
+        types = request.POST.get("adInfoTypes[]", [])
+        if types:
+            types = types.split(",")
+            for t in types:
+                ais.append(build_additional_information_from_request(request, t))
+
+        new_s = Scenario.create(p, name, description, scen_date, study_type, ais)
+        return JsonResponse({"scenarioLocation": new_s.url})
+    except ValueError:
+        return 403, {
+            "message": f"Getting Package with id {package_uuid} failed due to insufficient rights!"
+        }
+
+
 @router.delete("/package/{uuid:package_uuid}/scenario")
-def delete_scenarios(request, package_uuid, scenario_uuid):
+def delete_scenarios(request, package_uuid):
     try:
-        p = PackageManager.get_package_by_id(request.user, package_uuid)
-        if PackageManager.writable(request.user, p):
-            scens = Scenario.objects.filter(package=p)
-            scens.delete()
-            return redirect(f"{p.url}/scenario")
-        else:
-            raise ValueError("You do not have the rights to delete Scenarios!")
+        p = get_package_for_write(request.user, package_uuid)
+        scens = Scenario.objects.filter(package=p)
+        scens.delete()
+        return redirect(f"{p.url}/scenario")
     except ValueError:
         return 403, {"message": "Deleting Scenarios failed due to insufficient rights!"}
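The create endpoint assembles a partial date string from separate year/month/day form fields, appending each part only when it is present. That logic can be sketched as a small helper (hypothetical name, with an extra `None` guard so a month without a year cannot fail):

```python
def build_scenario_date(year, month, day):
    """Assemble a partial date string; each part is appended only if present.

    Hypothetical helper mirroring the endpoint's sequential checks; the extra
    `scen_date is not None` guards are an assumption, not in the original.
    """
    scen_date = None
    if year:
        scen_date = year
    if month and scen_date is not None:
        scen_date += f"-{month}"
    if day and scen_date is not None:
        scen_date += f"-{day}"
    return scen_date


print(build_scenario_date("2026", "03", "17"))  # 2026-03-17
```

With no year at all the date stays `None`, matching the endpoint's default.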
@@ -1259,20 +1444,61 @@ def delete_scenarios(request, package_uuid, scenario_uuid):
 @router.delete("/package/{uuid:package_uuid}/scenario/{uuid:scenario_uuid}")
 def delete_scenario(request, package_uuid, scenario_uuid):
     try:
-        p = PackageManager.get_package_by_id(request.user, package_uuid)
-        if PackageManager.writable(request.user, p):
-            scen = Scenario.objects.get(package=p, uuid=scenario_uuid)
-            scen.delete()
-            return redirect(f"{p.url}/scenario")
-        else:
-            raise ValueError("You do not have the rights to delete this Scenario!")
+        p = get_package_for_write(request.user, package_uuid)
+        scen = Scenario.objects.get(package=p, uuid=scenario_uuid)
+        scen.delete()
+        return redirect(f"{p.url}/scenario")
     except ValueError:
         return 403, {
             "message": f"Deleting Scenario with id {scenario_uuid} failed due to insufficient rights!"
         }


+@router.post(
+    "/package/{uuid:package_uuid}/additional-information", response={200: str | Any, 403: Error}
+)
+def create_package_additional_information(request, package_uuid):
+    from utilities.legacy import build_additional_information_from_request
+
+    try:
+        p = get_package_for_write(request.user, package_uuid)
+
+        scen = request.POST.get("scenario")
+        scenario = Scenario.objects.get(package=p, url=scen)
+
+        url_parser = EPDBURLParser(request.POST.get("attach_obj"))
+        attach_obj = url_parser.get_object()
+
+        if not hasattr(attach_obj, "additional_information"):
+            raise ValueError("Can't attach additional information to this object!")
+
+        if not attach_obj.url.startswith(p.url):
+            raise ValueError(
+                "Additional Information can only be set to objects stored in the same package!"
+            )
+
+        types = request.POST.get("adInfoTypes[]", "").split(",")
+        for t in types:
+            ai = build_additional_information_from_request(request, t)
+            AdditionalInformation.create(
+                p,
+                ai,
+                scenario=scenario,
+                content_object=attach_obj,
+            )
+
+        # TODO implement additional information endpoint ?
+        return redirect(f"{scenario.url}")
+    except ValueError:
+        return 403, {
+            "message": f"Getting Package with id {package_uuid} failed due to insufficient rights!"
+        }
+

 ###########
 # Pathway #
 ###########
@@ -1289,8 +1515,8 @@ class PathwayEdge(Schema):
     pseudo: bool = False
     rule: Optional[str] = Field(None, alias="rule")
     scenarios: List[SimpleScenario] = Field([], alias="scenarios")
-    source: int = -1
-    target: int = -1
+    source: int = Field(-1)
+    target: int = Field(-1)

     @staticmethod
     def resolve_rule(obj: Edge):
@@ -1303,7 +1529,7 @@ class PathwayEdge(Schema):
 class PathwayNode(Schema):
     atomCount: int = Field(None, alias="atom_count")
-    depth: int = Field(None, alias="depth")
+    depth: float = Field(None, alias="depth")
     dt50s: List[Dict[str, str]] = Field([], alias="dt50s")
     engineeredIntermediate: bool = Field(None, alias="engineered_intermediate")
     id: str = Field(None, alias="url")
@@ -1353,9 +1579,9 @@ class PathwaySchema(Schema):
     isIncremental: bool = Field(None, alias="is_incremental")
     isPredicted: bool = Field(None, alias="is_predicted")
     lastModified: int = Field(None, alias="last_modified")
-    links: List[PathwayEdge] = Field([], alias="edges")
+    links: List[PathwayEdge] = Field([])
     name: str = Field(None, alias="name")
-    nodes: List[PathwayNode] = Field([], alias="nodes")
+    nodes: List[PathwayNode] = Field([])
     pathwayName: str = Field(None, alias="name")
     reviewStatus: str = Field(None, alias="review_status")
     scenarios: List["SimpleScenario"] = Field([], alias="scenarios")
@@ -1377,6 +1603,14 @@ class PathwaySchema(Schema):
     def resolve_last_modified(obj: Pathway):
         return int(obj.modified.timestamp())

+    @staticmethod
+    def resolve_links(obj: Pathway):
+        return obj.d3_json().get("links", [])
+
+    @staticmethod
+    def resolve_nodes(obj: Pathway):
+        return obj.d3_json().get("nodes", [])
+
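The `resolve_links`/`resolve_nodes` resolvers read the serialized graph from `d3_json()` instead of relying on field aliases. The dict-with-default access pattern in isolation, using a stand-in object (the exact payload of `d3_json()` is an assumption here):

```python
class PathwayStub:
    """Stand-in for Pathway; assumes d3_json() returns a dict that may
    or may not contain "nodes" and "links" keys."""

    def d3_json(self):
        return {"nodes": [{"id": "n0", "depth": 0.5}]}  # no "links" key


pw = PathwayStub()
# .get(..., []) keeps the schema fields list-valued even when a key is absent.
print(pw.d3_json().get("nodes", []))
print(pw.d3_json().get("links", []))
```

This way a pathway without edges serializes as an empty list rather than raising `KeyError`.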
 @router.get("/pathway", response={200: PathwayWrapper, 403: Error})
 def get_pathways(request):

@@ -1420,16 +1654,16 @@ class CreatePathway(Schema):
     selectedSetting: str | None = None


-@router.post("/package/{uuid:package_uuid}/pathway")
-def create_pathway(
+@router.post("/package/{uuid:package_uuid}/pathway", response={200: Any, 403: Error})
+def create_package_pathway(
     request,
     package_uuid,
     pw: Form[CreatePathway],
 ):
     try:
-        p = PackageManager.get_package_by_id(request.user, package_uuid)
-        stand_smiles = FormatConverter.standardize(pw.smilesinput.strip())
+        p = get_package_for_write(request.user, package_uuid)
+        stand_smiles = FormatConverter.standardize(pw.smilesinput.strip(), remove_stereo=True)
         new_pw = Pathway.create(p, stand_smiles, name=pw.name, description=pw.description)
@@ -1447,6 +1681,7 @@ def create_pathway(
             setting = SettingManager.get_setting_by_url(request.user, pw.selectedSetting)
             new_pw.setting = setting

+        new_pw.kv.update({"status": "running"})
         new_pw.save()

         from .tasks import dispatch, predict
@@ -1455,20 +1690,18 @@ def create_pathway(
         return redirect(new_pw.url)
     except ValueError as e:
-        return 400, {"message": str(e)}
+        return 403, {"message": str(e)}
 @router.delete("/package/{uuid:package_uuid}/pathway/{uuid:pathway_uuid}")
 def delete_pathway(request, package_uuid, pathway_uuid):
     try:
-        p = PackageManager.get_package_by_id(request.user, package_uuid)
-        if PackageManager.writable(request.user, p):
-            pw = Pathway.objects.get(package=p, uuid=pathway_uuid)
-            pw.delete()
-            return redirect(f"{p.url}/pathway")
-        else:
-            raise ValueError("You do not have the rights to delete this pathway!")
+        p = get_package_for_write(request.user, package_uuid)
+        pw = Pathway.objects.get(package=p, uuid=pathway_uuid)
+        pw.delete()
+        return redirect(f"{p.url}/pathway")
     except ValueError:
         return 403, {
             "message": f"Deleting Pathway with id {pathway_uuid} failed due to insufficient rights!"
@@ -1576,7 +1809,7 @@ class CreateNode(Schema):
 )
 def add_pathway_node(request, package_uuid, pathway_uuid, n: Form[CreateNode]):
     try:
-        p = PackageManager.get_package_by_id(request.user, package_uuid)
+        p = get_package_for_write(request.user, package_uuid)
         pw = Pathway.objects.get(package=p, uuid=pathway_uuid)

         if n.nodeDepth is not None and n.nodeDepth.strip() != "":
@@ -1594,15 +1827,13 @@ def add_pathway_node(request, package_uuid, pathway_uuid, n: Form[CreateNode]):
 @router.delete("/package/{uuid:package_uuid}/pathway/{uuid:pathway_uuid}/node/{uuid:node_uuid}")
 def delete_node(request, package_uuid, pathway_uuid, node_uuid):
     try:
-        p = PackageManager.get_package_by_id(request.user, package_uuid)
-        if PackageManager.writable(request.user, p):
-            pw = Pathway.objects.get(package=p, uuid=pathway_uuid)
-            n = Node.objects.get(pathway=pw, uuid=node_uuid)
-            n.delete()
-            return redirect(f"{pw.url}/node")
-        else:
-            raise ValueError("You do not have the rights to delete this Node!")
+        p = get_package_for_write(request.user, package_uuid)
+        pw = Pathway.objects.get(package=p, uuid=pathway_uuid)
+        n = Node.objects.get(pathway=pw, uuid=node_uuid)
+        n.delete()
+        return redirect(f"{pw.url}/node")
     except ValueError:
         return 403, {
             "message": f"Deleting Node with id {node_uuid} failed due to insufficient rights!"
@@ -1632,14 +1863,14 @@ class EdgeSchema(Schema):
     id: str = Field(None, alias="url")
     identifier: str = "edge"
     name: str = Field(None, alias="name")
-    reactionName: str = Field(None, alias="edge_label.name")
+    reactionName: str = Field(None, alias="edge_label.get_name")
    reactionURI: str = Field(None, alias="edge_label.url")
     reviewStatus: str = Field(None, alias="review_status")
     scenarios: List["SimpleScenario"] = Field([], alias="scenarios")
     startNodes: List["EdgeNode"] = Field([], alias="start_nodes")

     @staticmethod
-    def resolve_review_status(obj: Node):
+    def resolve_review_status(obj: Edge):
         return "reviewed" if obj.pathway.package.reviewed else "unreviewed"
@@ -1681,12 +1912,12 @@ class CreateEdge(Schema):
 @router.post(
-    "/package/{uuid:package_uuid}/üathway/{uuid:pathway_uuid}/edge",
+    "/package/{uuid:package_uuid}/pathway/{uuid:pathway_uuid}/edge",
     response={200: str | Any, 403: Error},
 )
 def add_pathway_edge(request, package_uuid, pathway_uuid, e: Form[CreateEdge]):
     try:
-        p = PackageManager.get_package_by_id(request.user, package_uuid)
+        p = get_package_for_write(request.user, package_uuid)
         pw = Pathway.objects.get(package=p, uuid=pathway_uuid)

         if e.edgeAsSmirks is None and (e.educts is None or e.products is None):
@@ -1700,10 +1931,26 @@ def add_pathway_edge(request, package_uuid, pathway_uuid, e: Form[CreateEdge]):
         if e.edgeAsSmirks:
             for ed in e.edgeAsSmirks.split(">>")[0].split("\\."):
-                educts.append(Node.objects.get(pathway=pw, default_node_label__smiles=ed))
+                stand_ed = FormatConverter.standardize(ed, remove_stereo=True)
+                educts.append(
+                    Node.objects.get(
+                        pathway=pw,
+                        default_node_label=CompoundStructure.objects.get(
+                            compound__package=p, smiles=stand_ed
+                        ).compound.default_structure,
+                    )
+                )
             for pr in e.edgeAsSmirks.split(">>")[1].split("\\."):
-                products.append(Node.objects.get(pathway=pw, default_node_label__smiles=pr))
+                stand_pr = FormatConverter.standardize(pr, remove_stereo=True)
+                products.append(
+                    Node.objects.get(
+                        pathway=pw,
+                        default_node_label=CompoundStructure.objects.get(
+                            compound__package=p, smiles=stand_pr
+                        ).compound.default_structure,
+                    )
+                )
         else:
             for ed in e.educts.split(","):
                 educts.append(Node.objects.get(pathway=pw, url=ed.strip()))
@@ -1716,7 +1963,7 @@ def add_pathway_edge(request, package_uuid, pathway_uuid, e: Form[CreateEdge]):
             start_nodes=educts,
             end_nodes=products,
             rule=None,
-            name=e.name,
+            name=None,
             description=e.edgeReason,
         )
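`add_pathway_edge` derives educt and product SMILES by splitting the SMIRKS string on `>>` and then on the dot separator. Worth noting: `str.split` treats its argument literally, so `split("\\.")` splits on a backslash-dot pair, not on `.` as a regex would. An illustrative plain split (not the endpoint's exact code):

```python
# Hypothetical SMIRKS: two educts react to two products.
smirks = "CCO.CC>>CC=O.C"

educt_part, product_part = smirks.split(">>")
educts = educt_part.split(".")    # str.split uses a literal separator
products = product_part.split(".")

print(educts, products)
```

If regex semantics were intended, `re.split(r"\.", ...)` would be the equivalent; for plain SMILES lists the literal `"."` split shown here suffices.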
@@ -1728,15 +1975,13 @@ def add_pathway_edge(request, package_uuid, pathway_uuid, e: Form[CreateEdge]):
 @router.delete("/package/{uuid:package_uuid}/pathway/{uuid:pathway_uuid}/edge/{uuid:edge_uuid}")
 def delete_edge(request, package_uuid, pathway_uuid, edge_uuid):
     try:
-        p = PackageManager.get_package_by_id(request.user, package_uuid)
-        if PackageManager.writable(request.user, p):
-            pw = Pathway.objects.get(package=p, uuid=pathway_uuid)
-            e = Edge.objects.get(pathway=pw, uuid=edge_uuid)
-            e.delete()
-            return redirect(f"{pw.url}/edge")
-        else:
-            raise ValueError("You do not have the rights to delete this Edge!")
+        p = get_package_for_write(request.user, package_uuid)
+        pw = Pathway.objects.get(package=p, uuid=pathway_uuid)
+        e = Edge.objects.get(pathway=pw, uuid=edge_uuid)
+        e.delete()
+        return redirect(f"{pw.url}/edge")
     except ValueError:
         return 403, {
             "message": f"Deleting Edge with id {edge_uuid} failed due to insufficient rights!"
@@ -1753,26 +1998,46 @@ class ModelWrapper(Schema):
 class ModelSchema(Schema):
     aliases: List[str] = Field([], alias="aliases")
     description: str = Field(None, alias="description")
-    evalPackages: List["SimplePackage"] = Field([])
+    evalPackages: List["SimplePackage"] = Field([], alias="eval_packages")
     id: str = Field(None, alias="url")
     identifier: str = "relative-reasoning"
-    # "info" : {
-    #     "Accuracy (Single-Gen)" : "0.5932962678936605" ,
-    #     "Area under PR-Curve (Single-Gen)" : "0.5654653182134282" ,
-    #     "Area under ROC-Curve (Single-Gen)" : "0.8178302405034772" ,
-    #     "Precision (Single-Gen)" : "0.6978730822873083" ,
-    #     "Probability Threshold" : "0.5" ,
-    #     "Recall/Sensitivity (Single-Gen)" : "0.4484149210261006"
-    # } ,
+    info: dict = Field({}, alias="info")
     name: str = Field(None, alias="name")
-    pathwayPackages: List["SimplePackage"] = Field([])
+    pathwayPackages: List["SimplePackage"] = Field([], alias="pathway_packages")
     reviewStatus: str = Field(None, alias="review_status")
-    rulePackages: List["SimplePackage"] = Field([])
+    rulePackages: List["SimplePackage"] = Field([], alias="rule_packages")
     scenarios: List["SimpleScenario"] = Field([], alias="scenarios")
-    status: str
-    statusMessage: str
-    threshold: str
-    type: str
+    status: str = Field(None, alias="model_status")
+    statusMessage: str = Field(None, alias="status_message")
+    threshold: str = Field(None, alias="threshold")
+    type: str = Field(None, alias="model_type")
+
+    @staticmethod
+    def resolve_info(obj: EPModel):
+        return {}
+
+    @staticmethod
+    def resolve_status_message(obj: EPModel):
+        for k, v in PackageBasedModel.PROGRESS_STATUS_CHOICES.items():
+            if k == obj.model_status:
+                return v
+        return None
+
+    @staticmethod
+    def resolve_threshold(obj: EPModel):
+        return f"{obj.threshold:.2f}"
+
+    @staticmethod
+    def resolve_model_type(obj: EPModel):
+        if isinstance(obj, RuleBasedRelativeReasoning):
+            return "RULEBASED"
+        elif isinstance(obj, MLRelativeReasoning):
+            return "ECC"
+        elif isinstance(obj, EnviFormer):
+            return "ENVIFORMER"
+        else:
+            return None
 @router.get("/model", response={200: ModelWrapper, 403: Error})

@@ -1809,7 +2074,7 @@ def get_model(request, package_uuid, model_uuid, c: Query[Classify]):
         return 400, {"message": "Received empty SMILES"}

     try:
-        stand_smiles = FormatConverter.standardize(c.smiles)
+        stand_smiles = FormatConverter.standardize(c.smiles, remove_stereo=True)
     except ValueError:
         return 400, {"message": f'"{c.smiles}" is not a valid SMILES'}
@@ -1833,7 +2098,7 @@ def get_model(request, package_uuid, model_uuid, c: Query[Classify]):
                 if pr.rule:
                     res["id"] = pr.rule.url
                     res["identifier"] = pr.rule.get_rule_identifier()
-                    res["name"] = pr.rule.name
+                    res["name"] = pr.rule.get_name()
                     res["reviewStatus"] = (
                         "reviewed" if pr.rule.package.reviewed else "unreviewed"
                     )
@@ -1852,14 +2117,11 @@ def get_model(request, package_uuid, model_uuid, c: Query[Classify]):
 @router.delete("/package/{uuid:package_uuid}/model/{uuid:model_uuid}")
 def delete_model(request, package_uuid, model_uuid):
     try:
-        p = PackageManager.get_package_by_id(request.user, package_uuid)
-        if PackageManager.writable(request.user, p):
-            m = EPModel.objects.get(package=p, uuid=model_uuid)
-            m.delete()
-            return redirect(f"{p.url}/model")
-        else:
-            raise ValueError("You do not have the rights to delete this Model!")
+        p = get_package_for_write(request.user, package_uuid)
+        m = EPModel.objects.get(package=p, uuid=model_uuid)
+        m.delete()
+        return redirect(f"{p.url}/model")
     except ValueError:
         return 403, {
             "message": f"Deleting Model with id {model_uuid} failed due to insufficient rights!"

View File

@@ -1,4 +1,3 @@
-import json
 import logging
 import re
 from typing import Any, Dict, List, Optional, Set, Union, Tuple
@@ -11,6 +10,7 @@ from django.db import transaction
 from pydantic import ValidationError

 from epdb.models import (
+    AdditionalInformation,
     Compound,
     CompoundStructure,
     Edge,
@@ -22,6 +22,7 @@ from epdb.models import (
     Node,
     Pathway,
     Permission,
+    PropertyPluginModel,
     Reaction,
     Rule,
     Setting,
@@ -194,8 +195,6 @@ class UserManager(object):
         if clean_username != username or clean_email != email:
             # This will be caught by the try in view.py/register
             raise ValueError("Invalid username or password")
-        # avoid circular import :S
-        from .tasks import send_registration_mail

         extra_fields = {"is_active": not s.ADMIN_APPROVAL_REQUIRED}
@@ -214,10 +213,6 @@ class UserManager(object):
         u.default_package = p
         u.save()

-        if not u.is_active:
-            # send email for verification
-            send_registration_mail.delay(u.pk)
-
         if set_setting:
             u.default_setting = Setting.objects.get(global_default=True)
             u.save()
@@ -639,15 +634,30 @@ class PackageManager(object):
         # Stores old_id to new_id
         mapping = {}

-        # Stores new_scen_id to old_parent_scen_id
-        parent_mapping = {}
-
         # Mapping old scen_id to old_obj_id
         scen_mapping = defaultdict(list)

         # Enzymelink Mapping rule_id to enzymelink objects
         enzyme_mapping = defaultdict(list)

+        # old_parent_id to child
+        postponed_scens = defaultdict(list)
+
         # Store Scenarios
         for scenario in data["scenarios"]:
+            skip_scen = False
+            # Check if parent exists and park this Scenario to convert it later into an
+            # AdditionalInformation object
+            for ex in scenario.get("additionalInformationCollection", {}).get(
+                "additionalInformation", []
+            ):
+                if ex["name"] == "referringscenario":
+                    postponed_scens[ex["data"]].append(scenario)
+                    skip_scen = True
+                    break
+
+            if skip_scen:
+                continue
+
             scen = Scenario()
             scen.package = pack
             scen.uuid = UUID(scenario["id"].split("/")[-1]) if keep_ids else uuid4()
@@ -660,19 +670,12 @@ class PackageManager(object):
             mapping[scenario["id"]] = scen.uuid

-            new_add_inf = defaultdict(list)
-            # TODO Store AI...
             for ex in scenario.get("additionalInformationCollection", {}).get(
                 "additionalInformation", []
             ):
                 name = ex["name"]
                 addinf_data = ex["data"]

-                # park the parent scen id for now and link it later
-                if name == "referringscenario":
-                    parent_mapping[scen.uuid] = addinf_data
-                    continue
-
                 # Broken eP Data
                 if name == "initialmasssediment" and addinf_data == "missing data":
                     continue
@@ -680,17 +683,11 @@ class PackageManager(object):
                     continue

                 try:
-                    res = AdditionalInformationConverter.convert(name, addinf_data)
-                    res_cls_name = res.__class__.__name__
-                    ai_data = json.loads(res.model_dump_json())
-                    ai_data["uuid"] = f"{uuid4()}"
-                    new_add_inf[res_cls_name].append(ai_data)
-                except ValidationError:
+                    ai = AdditionalInformationConverter.convert(name, addinf_data)
+                    AdditionalInformation.create(pack, ai, scenario=scen)
+                except (ValidationError, ValueError):
                     logger.error(f"Failed to convert {name} with {addinf_data}")

-            scen.additional_information = new_add_inf
-            scen.save()
-
         print("Scenarios imported...")
         # Store compounds and its structures
@@ -930,14 +927,46 @@ class PackageManager(object):
         print("Pathways imported...")

-        # Linking Phase
-        for child, parent in parent_mapping.items():
-            child_obj = Scenario.objects.get(uuid=child)
-            parent_obj = Scenario.objects.get(uuid=mapping[parent])
-            child_obj.parent = parent_obj
-            child_obj.save()
+        for parent, children in postponed_scens.items():
+            for child in children:
+                for ex in child.get("additionalInformationCollection", {}).get(
+                    "additionalInformation", []
+                ):
+                    child_id = child["id"]
+                    name = ex["name"]
+                    addinf_data = ex["data"]
+
+                    if name == "referringscenario":
+                        continue
+
+                    # Broken eP Data
+                    if name == "initialmasssediment" and addinf_data == "missing data":
+                        continue
+
+                    if name == "columnheight" and addinf_data == "(2)-(2.5);(6)-(8)":
+                        continue
+
+                    ai = AdditionalInformationConverter.convert(name, addinf_data)
+
+                    if child_id not in scen_mapping:
+                        logger.info(
+                            f"{child_id} not found in scen_mapping. Seems like its not attached to any object"
+                        )
+                        print(
+                            f"{child_id} not found in scen_mapping. Seems like its not attached to any object"
+                        )
+
+                    scen = Scenario.objects.get(uuid=mapping[parent])
+                    mapping[child_id] = scen.uuid
+
+                    for obj in scen_mapping[child_id]:
+                        _ = AdditionalInformation.create(pack, ai, scen, content_object=obj)

         for scen_id, objects in scen_mapping.items():
+            new_id = mapping.get(scen_id)
+            if new_id is None:
+                logger.warning(f"Could not find mapping for {scen_id}")
+                print(f"Could not find mapping for {scen_id}")
+                continue
+
             scen = Scenario.objects.get(uuid=mapping[scen_id])
             for o in objects:
                 o.scenarios.add(scen)
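The import now parks scenarios that carry a `referringscenario` entry during the first pass and resolves them only after every potential parent exists. The two-pass parking pattern in isolation, with hypothetical record shapes in place of the real scenario JSON:

```python
from collections import defaultdict

# Hypothetical records: "b" and "c" refer to a parent that may not exist yet.
records = [
    {"id": "a"},
    {"id": "b", "parent": "a"},
    {"id": "c", "parent": "a"},
]

created = {}
postponed = defaultdict(list)

# First pass: create parentless records, park the rest keyed by parent id.
for rec in records:
    if "parent" in rec:
        postponed[rec["parent"]].append(rec)
        continue
    created[rec["id"]] = {"children": []}

# Second pass: every parent is guaranteed to exist now.
for parent, children in postponed.items():
    for child in children:
        created[parent]["children"].append(child["id"])

print(created)  # {'a': {'children': ['b', 'c']}}
```

Deferring the dependent records removes any ordering requirement on the input list.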
@@ -970,6 +999,7 @@ class PackageManager(object):
                 matches = re.findall(r">(R[0-9]+)<", evidence["evidence"])
                 if not matches or len(matches) != 1:
                     logger.warning(f"Could not find reaction id in {evidence['evidence']}")
+                    print(f"Could not find reaction id in {evidence['evidence']}")
                     continue

                 e.add_kegg_reaction_id(matches[0])
@@ -989,7 +1019,6 @@ class PackageManager(object):
         print("Fixing Node depths...")
         total_pws = Pathway.objects.filter(package=pack).count()
         for p, pw in enumerate(Pathway.objects.filter(package=pack)):
-            print(pw.url)
             in_count = defaultdict(lambda: 0)
             out_count = defaultdict(lambda: 0)
@@ -1025,7 +1054,6 @@ class PackageManager(object):
                     if str(prod.uuid) not in seen:
                         old_depth = prod.depth
                         if old_depth != i + 1:
-                            print(f"updating depth from {old_depth} to {i + 1}")
                             prod.depth = i + 1
                             prod.save()
@@ -1036,7 +1064,7 @@ class PackageManager(object):
             if new_level:
                 levels.append(new_level)

-            print(f"{p + 1}/{total_pws} fixed.")
+            print(f"{p + 1}/{total_pws} fixed.", end="\r")

         return pack
@@ -1115,19 +1143,23 @@ class SettingManager(object):
         description: str = None,
         max_nodes: int = None,
         max_depth: int = None,
-        rule_packages: List[Package] = None,
+        rule_packages: List[Package] | None = None,
         model: EPModel = None,
         model_threshold: float = None,
         expansion_scheme: ExpansionSchemeChoice = ExpansionSchemeChoice.BFS,
+        property_models: List["PropertyPluginModel"] | None = None,
     ):
         new_s = Setting()
         # Clean for potential XSS
         new_s.name = nh3.clean(name, tags=s.ALLOWED_HTML_TAGS).strip()
         new_s.description = nh3.clean(description, tags=s.ALLOWED_HTML_TAGS).strip()
         new_s.max_nodes = max_nodes
         new_s.max_depth = max_depth
         new_s.model = model
         new_s.model_threshold = model_threshold
+        new_s.expansion_scheme = expansion_scheme
         new_s.save()
@@ -1136,6 +1168,11 @@ class SettingManager(object):
                 new_s.rule_packages.add(r)
             new_s.save()

+        if property_models is not None:
+            for pm in property_models:
+                new_s.property_models.add(pm)
+            new_s.save()
+
         usp = UserSettingPermission()
         usp.user = user
         usp.setting = new_s

View File

@@ -8,7 +8,6 @@ from epdb.logic import UserManager, GroupManager, PackageManager, SettingManager
 from epdb.models import (
     UserSettingPermission,
     MLRelativeReasoning,
-    EnviFormer,
     Permission,
     User,
     ExternalDatabase,
@@ -231,7 +230,6 @@ class Command(BaseCommand):
             package=pack,
             rule_packages=[mapping["EAWAG-BBD"]],
             data_packages=[mapping["EAWAG-BBD"]],
-            eval_packages=[],
             threshold=0.5,
             name="ECC - BBD - T0.5",
             description="ML Relative Reasoning",
@@ -239,7 +237,3 @@ class Command(BaseCommand):
         ml_model.build_dataset()
         ml_model.build_model()
-
-        # If available, create EnviFormerModel
-        if s.ENVIFORMER_PRESENT:
-            EnviFormer.create(pack, "EnviFormer - T0.5", "EnviFormer Model with Threshold 0.5", 0.5)

View File

@ -0,0 +1,92 @@
from django.conf import settings as s
from django.contrib.auth import get_user_model
from django.core.management.base import BaseCommand, CommandError
from epdb.models import APIToken
class Command(BaseCommand):
help = "Create an API token for a user"
def add_arguments(self, parser):
parser.add_argument(
"--username",
required=True,
help="Username of the user who will own the token",
)
parser.add_argument(
"--name",
required=True,
help="Descriptive name for the token",
)
parser.add_argument(
"--expires-days",
type=int,
default=90,
help="Days until expiration (0 for no expiration)",
)
parser.add_argument(
"--inactive",
action="store_true",
help="Create the token as inactive",
)
parser.add_argument(
"--curl",
action="store_true",
help="Print a curl example using the token",
)
parser.add_argument(
"--base-url",
default=None,
help="Base URL for curl example (default SERVER_URL or http://localhost:8000)",
)
parser.add_argument(
"--endpoint",
default="/api/v1/compounds/",
help="Endpoint path for curl example",
)
def handle(self, *args, **options):
username = options["username"]
name = options["name"]
expires_days = options["expires_days"]
if expires_days < 0:
raise CommandError("--expires-days must be >= 0")
if expires_days == 0:
expires_days = None
user_model = get_user_model()
try:
user = user_model.objects.get(username=username)
except user_model.DoesNotExist as exc:
raise CommandError(f"User not found for username '{username}'") from exc
token, raw_token = APIToken.create_token(user, name=name, expires_days=expires_days)
if options["inactive"]:
token.is_active = False
token.save(update_fields=["is_active"])
self.stdout.write(f"User: {user.username} ({user.email})")
self.stdout.write(f"Token name: {token.name}")
self.stdout.write(f"Token id: {token.id}")
if token.expires_at:
self.stdout.write(f"Expires at: {token.expires_at.isoformat()}")
else:
self.stdout.write("Expires at: never")
self.stdout.write(f"Active: {token.is_active}")
self.stdout.write("Raw token:")
self.stdout.write(raw_token)
if options["curl"]:
base_url = (
options["base_url"] or getattr(s, "SERVER_URL", None) or "http://localhost:8000"
)
endpoint = options["endpoint"]
endpoint = endpoint if endpoint.startswith("/") else f"/{endpoint}"
url = f"{base_url.rstrip('/')}{endpoint}"
curl_cmd = f'curl -H "Authorization: Bearer {raw_token}" "{url}"'
self.stdout.write("Curl:")
self.stdout.write(curl_cmd)
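The curl command assembly in `handle()` is plain string normalization and can be checked in isolation. A minimal sketch of the same logic (`build_curl` is an illustrative name, not part of the command):

```python
def build_curl(base_url: str, endpoint: str, raw_token: str) -> str:
    # Mirror the normalization in handle(): guarantee exactly one slash
    # between the base URL and the endpoint path.
    endpoint = endpoint if endpoint.startswith("/") else f"/{endpoint}"
    url = f"{base_url.rstrip('/')}{endpoint}"
    return f'curl -H "Authorization: Bearer {raw_token}" "{url}"'

# Trailing slash on the base and a missing leading slash on the endpoint
# both normalize to the same URL.
cmd = build_curl("http://localhost:8000/", "api/v1/compounds/", "tok123")
```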


@ -47,7 +47,7 @@ class Command(BaseCommand):
"description": model.description, "description": model.description,
"kv": model.kv, "kv": model.kv,
"data_packages_uuids": [str(p.uuid) for p in model.data_packages.all()], "data_packages_uuids": [str(p.uuid) for p in model.data_packages.all()],
"eval_packages_uuids": [str(p.uuid) for p in model.data_packages.all()], "eval_packages_uuids": [str(p.uuid) for p in model.eval_packages.all()],
"threshold": model.threshold, "threshold": model.threshold,
"eval_results": model.eval_results, "eval_results": model.eval_results,
"multigen_eval": model.multigen_eval, "multigen_eval": model.multigen_eval,


@ -41,9 +41,7 @@ class Command(BaseCommand):
"SequentialRule", "SequentialRule",
"Scenario", "Scenario",
"Setting", "Setting",
"MLRelativeReasoning", "EPModel",
"RuleBasedRelativeReasoning",
"EnviFormer",
"ApplicabilityDomain", "ApplicabilityDomain",
"EnzymeLink", "EnzymeLink",
] ]


@ -0,0 +1,83 @@
import os
import subprocess
from django.conf import settings
from django.core.management import call_command
from django.core.management.base import BaseCommand
class Command(BaseCommand):
def add_arguments(self, parser):
parser.add_argument(
"-n",
"--name",
type=str,
help="Name of the database to recreate. Default is 'appdb'",
default="appdb",
)
parser.add_argument(
"-d",
"--dump",
type=str,
help="Path to the dump file",
default="./fixtures/db.dump",
)
parser.add_argument(
"-ou",
"--oldurl",
type=str,
help="Old URL, e.g. https://envipath.org/",
default="https://envipath.org/",
)
parser.add_argument(
"-nu",
"--newurl",
type=str,
help="New URL, e.g. http://localhost:8000/",
default="http://localhost:8000/",
)
def handle(self, *args, **options):
dump_file = options["dump"]
if not os.path.exists(dump_file):
raise ValueError(f"Dump file {dump_file} does not exist")
db_name = options["name"]
print(f"Dropping database {db_name} y/n: ", end="")
if input() in "yY":
result = subprocess.run(
["dropdb", db_name],
capture_output=True,
text=True,
)
print(result.stdout)
else:
raise ValueError("Aborted")
print(f"Creating database {db_name}")
result = subprocess.run(
["createdb", db_name],
capture_output=True,
text=True,
)
print(result.stdout)
print(f"Restoring database {db_name} from {dump_file}")
result = subprocess.run(
["pg_restore", "-d", db_name, dump_file, "--no-owner"],
capture_output=True,
text=True,
)
print(result.stdout)
if db_name == settings.DATABASES["default"]["NAME"]:
call_command("localize_urls", "--old", options["oldurl"], "--new", options["newurl"])
else:
print("Skipping localize_urls as database is not the default one.")


@ -0,0 +1,17 @@
# Generated by Django 5.2.7 on 2026-01-19 19:26
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
("epdb", "0014_rename_expansion_schema_setting_expansion_scheme"),
]
operations = [
migrations.AddField(
model_name="user",
name="is_reviewer",
field=models.BooleanField(default=False),
),
]


@ -0,0 +1,179 @@
# Generated by Django 5.2.7 on 2026-02-12 09:38
import django.db.models.deletion
from django.conf import settings
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
("epdb", "0015_user_is_reviewer"),
]
operations = [
migrations.RemoveField(
model_name="enviformer",
name="model_status",
),
migrations.RemoveField(
model_name="mlrelativereasoning",
name="model_status",
),
migrations.RemoveField(
model_name="rulebasedrelativereasoning",
name="model_status",
),
migrations.AddField(
model_name="epmodel",
name="model_status",
field=models.CharField(
choices=[
("INITIAL", "Initial"),
("INITIALIZING", "Model is initializing."),
("BUILDING", "Model is building."),
(
"BUILT_NOT_EVALUATED",
"Model is built and can be used for predictions, Model is not evaluated yet.",
),
("EVALUATING", "Model is evaluating"),
("FINISHED", "Model has finished building and evaluation."),
("ERROR", "Model has failed."),
],
default="INITIAL",
),
),
migrations.AlterField(
model_name="enviformer",
name="eval_packages",
field=models.ManyToManyField(
blank=True,
related_name="%(app_label)s_%(class)s_eval_packages",
to=settings.EPDB_PACKAGE_MODEL,
verbose_name="Evaluation Packages",
),
),
migrations.AlterField(
model_name="enviformer",
name="rule_packages",
field=models.ManyToManyField(
blank=True,
related_name="%(app_label)s_%(class)s_rule_packages",
to=settings.EPDB_PACKAGE_MODEL,
verbose_name="Rule Packages",
),
),
migrations.AlterField(
model_name="mlrelativereasoning",
name="eval_packages",
field=models.ManyToManyField(
blank=True,
related_name="%(app_label)s_%(class)s_eval_packages",
to=settings.EPDB_PACKAGE_MODEL,
verbose_name="Evaluation Packages",
),
),
migrations.AlterField(
model_name="mlrelativereasoning",
name="rule_packages",
field=models.ManyToManyField(
blank=True,
related_name="%(app_label)s_%(class)s_rule_packages",
to=settings.EPDB_PACKAGE_MODEL,
verbose_name="Rule Packages",
),
),
migrations.AlterField(
model_name="rulebasedrelativereasoning",
name="eval_packages",
field=models.ManyToManyField(
blank=True,
related_name="%(app_label)s_%(class)s_eval_packages",
to=settings.EPDB_PACKAGE_MODEL,
verbose_name="Evaluation Packages",
),
),
migrations.AlterField(
model_name="rulebasedrelativereasoning",
name="rule_packages",
field=models.ManyToManyField(
blank=True,
related_name="%(app_label)s_%(class)s_rule_packages",
to=settings.EPDB_PACKAGE_MODEL,
verbose_name="Rule Packages",
),
),
migrations.CreateModel(
name="PropertyPluginModel",
fields=[
(
"epmodel_ptr",
models.OneToOneField(
auto_created=True,
on_delete=django.db.models.deletion.CASCADE,
parent_link=True,
primary_key=True,
serialize=False,
to="epdb.epmodel",
),
),
("threshold", models.FloatField(default=0.5)),
("eval_results", models.JSONField(blank=True, default=dict, null=True)),
("multigen_eval", models.BooleanField(default=False)),
("plugin_identifier", models.CharField(max_length=255)),
(
"app_domain",
models.ForeignKey(
blank=True,
default=None,
null=True,
on_delete=django.db.models.deletion.SET_NULL,
to="epdb.applicabilitydomain",
),
),
(
"data_packages",
models.ManyToManyField(
blank=True,
related_name="%(app_label)s_%(class)s_data_packages",
to=settings.EPDB_PACKAGE_MODEL,
verbose_name="Data Packages",
),
),
(
"eval_packages",
models.ManyToManyField(
blank=True,
related_name="%(app_label)s_%(class)s_eval_packages",
to=settings.EPDB_PACKAGE_MODEL,
verbose_name="Evaluation Packages",
),
),
(
"rule_packages",
models.ManyToManyField(
blank=True,
related_name="%(app_label)s_%(class)s_rule_packages",
to=settings.EPDB_PACKAGE_MODEL,
verbose_name="Rule Packages",
),
),
],
options={
"abstract": False,
},
bases=("epdb.epmodel",),
),
migrations.AddField(
model_name="setting",
name="property_models",
field=models.ManyToManyField(
blank=True,
related_name="settings",
to="epdb.propertypluginmodel",
verbose_name="Setting Property Models",
),
),
migrations.DeleteModel(
name="PluginModel",
),
]


@ -0,0 +1,93 @@
# Generated by Django 5.2.7 on 2026-02-20 12:02
import django.db.models.deletion
import uuid
from django.conf import settings
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
("contenttypes", "0002_remove_content_type_name"),
("epdb", "0016_remove_enviformer_model_status_and_more"),
]
operations = [
migrations.CreateModel(
name="AdditionalInformation",
fields=[
(
"id",
models.BigAutoField(
auto_created=True, primary_key=True, serialize=False, verbose_name="ID"
),
),
("uuid", models.UUIDField(default=uuid.uuid4, editable=False, unique=True)),
("url", models.TextField(null=True, unique=True, verbose_name="URL")),
("kv", models.JSONField(blank=True, default=dict, null=True)),
("type", models.TextField(verbose_name="Additional Information Type")),
("data", models.JSONField(blank=True, default=dict, null=True)),
("object_id", models.PositiveBigIntegerField(blank=True, null=True)),
(
"content_type",
models.ForeignKey(
blank=True,
null=True,
on_delete=django.db.models.deletion.CASCADE,
to="contenttypes.contenttype",
),
),
(
"package",
models.ForeignKey(
on_delete=django.db.models.deletion.CASCADE,
to=settings.EPDB_PACKAGE_MODEL,
verbose_name="Package",
),
),
(
"scenario",
models.ForeignKey(
blank=True,
null=True,
on_delete=django.db.models.deletion.CASCADE,
related_name="scenario_additional_information",
to="epdb.scenario",
),
),
],
options={
"indexes": [
models.Index(fields=["type"], name="epdb_additi_type_394349_idx"),
models.Index(
fields=["scenario", "type"], name="epdb_additi_scenari_a59edf_idx"
),
models.Index(
fields=["content_type", "object_id"], name="epdb_additi_content_44d4b4_idx"
),
models.Index(
fields=["scenario", "content_type", "object_id"],
name="epdb_additi_scenari_ef2bf5_idx",
),
],
"constraints": [
models.CheckConstraint(
condition=models.Q(
models.Q(("content_type__isnull", True), ("object_id__isnull", True)),
models.Q(("content_type__isnull", False), ("object_id__isnull", False)),
_connector="OR",
),
name="ck_addinfo_gfk_pair",
),
models.CheckConstraint(
condition=models.Q(
("scenario__isnull", False),
("content_type__isnull", False),
_connector="OR",
),
name="ck_addinfo_not_both_null",
),
],
},
),
]
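The two `CheckConstraint`s encode that (a) `content_type` and `object_id` must be set together, and (b) every `AdditionalInformation` row must be anchored to a scenario or to a content object. Expressed as plain predicates (function names are illustrative):

```python
def gfk_pair_ok(content_type, object_id) -> bool:
    # ck_addinfo_gfk_pair: the generic-FK pair is either fully set
    # or fully null, never half-populated.
    return (content_type is None) == (object_id is None)

def anchor_ok(scenario, content_type) -> bool:
    # ck_addinfo_not_both_null: at least one anchor must be present.
    return scenario is not None or content_type is not None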


@ -0,0 +1,132 @@
# Generated by Django 5.2.7 on 2026-02-20 12:03
from django.db import migrations
def get_additional_information(scenario):
from envipy_additional_information import registry
from envipy_additional_information.parsers import TypeOfAerationParser
for k, vals in scenario.additional_information.items():
if k == "enzyme":
continue
if k == "SpikeConentration":
k = "SpikeConcentration"
if k == "AerationType":
k = "TypeOfAeration"
for v in vals:
# Per default additional fields are ignored
MAPPING = {c.__name__: c for c in registry.list_models().values()}
try:
    inst = MAPPING[k](**v)
except Exception:
    if k == "TypeOfAeration":
        toa = TypeOfAerationParser()
        inst = toa.from_string(v["type"])
    else:
        # Re-raise instead of silently continuing with `inst` unbound,
        # which would surface later as a confusing NameError.
        raise
# Add uuid to uniquely identify objects for manipulation
if "uuid" in v:
inst.__dict__["uuid"] = v["uuid"]
yield inst
def forward_func(apps, schema_editor):
Scenario = apps.get_model("epdb", "Scenario")
ContentType = apps.get_model("contenttypes", "ContentType")
AdditionalInformation = apps.get_model("epdb", "AdditionalInformation")
bulk = []
related = []
ctype = {o.model: o for o in ContentType.objects.all()}
parents = Scenario.objects.prefetch_related(
"compound_set",
"compoundstructure_set",
"reaction_set",
"rule_set",
"pathway_set",
"node_set",
"edge_set",
).filter(parent__isnull=True)
for i, scenario in enumerate(parents):
print(f"{i + 1}/{len(parents)}", end="\r")
if scenario.parent is not None:
related.append(scenario.parent)
continue
for ai in get_additional_information(scenario):
bulk.append(
AdditionalInformation(
package=scenario.package,
scenario=scenario,
type=ai.__class__.__name__,
data=ai.model_dump(mode="json"),
)
)
print("\n", len(bulk))
related = Scenario.objects.prefetch_related(
"compound_set",
"compoundstructure_set",
"reaction_set",
"rule_set",
"pathway_set",
"node_set",
"edge_set",
).filter(parent__isnull=False)
for i, scenario in enumerate(related):
print(f"{i + 1}/{len(related)}", end="\r")
parent = scenario.parent
# Check to which objects this scenario is attached to
for ai in get_additional_information(scenario):
rel_objs = [
"compound",
"compoundstructure",
"reaction",
"rule",
"pathway",
"node",
"edge",
]
for rel_obj in rel_objs:
for o in getattr(scenario, f"{rel_obj}_set").all():
bulk.append(
AdditionalInformation(
package=scenario.package,
scenario=parent,
type=ai.__class__.__name__,
data=ai.model_dump(mode="json"),
content_type=ctype[rel_obj],
object_id=o.pk,
)
)
print("Start creating additional information objects...")
AdditionalInformation.objects.bulk_create(bulk)
print("Done!")
print(len(bulk))
Scenario.objects.filter(parent__isnull=False).delete()
# Call ai save to fix urls
ais = AdditionalInformation.objects.all()
total = ais.count()
for i, ai in enumerate(ais):
print(f"{i + 1}/{total}", end="\r")
ai.save()
class Migration(migrations.Migration):
dependencies = [
("epdb", "0017_additionalinformation"),
]
operations = [
migrations.RunPython(forward_func, reverse_code=migrations.RunPython.noop),
]
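`get_additional_information` repairs two legacy key spellings and skips enzyme entries before dispatching to the plugin registry. The key normalization alone can be sketched without the registry (`KEY_FIXES` and `normalize_keys` are illustrative names):

```python
# Legacy spellings found in the old Scenario.additional_information JSON.
KEY_FIXES = {
    "SpikeConentration": "SpikeConcentration",
    "AerationType": "TypeOfAeration",
}

def normalize_keys(additional_information: dict) -> dict:
    out = {}
    for k, vals in additional_information.items():
        if k == "enzyme":  # enzyme entries are handled elsewhere
            continue
        out[KEY_FIXES.get(k, k)] = vals
    return out

fixed = normalize_keys({"SpikeConentration": [1], "enzyme": [2], "Other": [3]})
```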


@ -0,0 +1,20 @@
# Generated by Django 5.2.7 on 2026-02-23 08:45
from django.db import migrations
class Migration(migrations.Migration):
dependencies = [
("epdb", "0018_auto_20260220_1203"),
]
operations = [
migrations.RemoveField(
model_name="scenario",
name="additional_information",
),
migrations.RemoveField(
model_name="scenario",
name="parent",
),
]


@ -0,0 +1,65 @@
# Generated by Django 5.2.7 on 2026-03-09 10:41
import django.db.models.deletion
from django.db import migrations, models
def populate_polymorphic_ctype(apps, schema_editor):
ContentType = apps.get_model("contenttypes", "ContentType")
Compound = apps.get_model("epdb", "Compound")
CompoundStructure = apps.get_model("epdb", "CompoundStructure")
# Update Compound records
compound_ct = ContentType.objects.get_for_model(Compound)
Compound.objects.filter(polymorphic_ctype__isnull=True).update(polymorphic_ctype=compound_ct)
# Update CompoundStructure records
compound_structure_ct = ContentType.objects.get_for_model(CompoundStructure)
CompoundStructure.objects.filter(polymorphic_ctype__isnull=True).update(
polymorphic_ctype=compound_structure_ct
)
def reverse_populate_polymorphic_ctype(apps, schema_editor):
Compound = apps.get_model("epdb", "Compound")
CompoundStructure = apps.get_model("epdb", "CompoundStructure")
Compound.objects.all().update(polymorphic_ctype=None)
CompoundStructure.objects.all().update(polymorphic_ctype=None)
class Migration(migrations.Migration):
dependencies = [
("contenttypes", "0002_remove_content_type_name"),
("epdb", "0019_remove_scenario_additional_information_and_more"),
]
operations = [
migrations.AlterModelOptions(
name="compoundstructure",
options={"base_manager_name": "objects"},
),
migrations.AddField(
model_name="compound",
name="polymorphic_ctype",
field=models.ForeignKey(
editable=False,
null=True,
on_delete=django.db.models.deletion.CASCADE,
related_name="polymorphic_%(app_label)s.%(class)s_set+",
to="contenttypes.contenttype",
),
),
migrations.AddField(
model_name="compoundstructure",
name="polymorphic_ctype",
field=models.ForeignKey(
editable=False,
null=True,
on_delete=django.db.models.deletion.CASCADE,
related_name="polymorphic_%(app_label)s.%(class)s_set+",
to="contenttypes.contenttype",
),
),
migrations.RunPython(populate_polymorphic_ctype, reverse_populate_polymorphic_ctype),
]

File diff suppressed because it is too large


@ -7,10 +7,21 @@ from uuid import uuid4
  from celery import shared_task
  from celery.utils.functional import LRUCache
  from django.conf import settings as s
+ from django.core.mail import EmailMultiAlternatives
  from django.utils import timezone
  from epdb.logic import SPathway
- from epdb.models import Edge, EPModel, JobLog, Node, Pathway, Rule, Setting, User
+ from epdb.models import (
+     AdditionalInformation,
+     Edge,
+     EPModel,
+     JobLog,
+     Node,
+     Pathway,
+     Rule,
+     Setting,
+     User,
+ )
  from utilities.chem import FormatConverter

  logger = logging.getLogger(__name__)
@ -65,15 +76,39 @@ def mul(a, b):
  @shared_task(queue="predict")
- def predict_simple(model_pk: int, smiles: str):
+ def predict_simple(model_pk: int, smiles: str, *args, **kwargs):
      mod = get_ml_model(model_pk)
-     res = mod.predict(smiles)
+     res = mod.predict(smiles, *args, **kwargs)
      return res

  @shared_task(queue="background")
  def send_registration_mail(user_pk: int):
-     pass
+     u = User.objects.get(id=user_pk)
+     tpl = """Welcome {username}!,
+ Thank you for your interest in enviPath.
+ The public system is intended for non-commercial use only.
+ We will review your account details and usually activate your account within 24 hours.
+ Once activated, you will be notified by email.
+ If we have any questions, we will contact you at this email address.
+ Best regards,
+ enviPath team"""
+     msg = EmailMultiAlternatives(
+         "Your enviPath account",
+         tpl.format(username=u.username),
+         "admin@envipath.org",
+         [u.email],
+         bcc=["admin@envipath.org"],
+     )
+     msg.send(fail_silently=False)

  @shared_task(bind=True, queue="model")
@ -204,9 +239,28 @@ def predict(
      if JobLog.objects.filter(task_id=self.request.id).exists():
          JobLog.objects.filter(task_id=self.request.id).update(status="SUCCESS", task_result=pw.url)
+     # dispatch property job
+     compute_properties.delay(pw_pk, pred_setting_pk)
      return pw.url

+ @shared_task(bind=True, queue="background")
+ def compute_properties(self, pathway_pk: int, setting_pk: int):
+     pw = Pathway.objects.get(id=pathway_pk)
+     setting = Setting.objects.get(id=setting_pk)
+     nodes = [n for n in pw.nodes]
+     smiles = [n.default_node_label.smiles for n in nodes]
+     for prop_mod in setting.property_models.all():
+         if prop_mod.instance().is_heavy():
+             rr = prop_mod.predict_batch(smiles)
+             for idx, pred in enumerate(rr.result):
+                 n = nodes[idx]
+                 _ = AdditionalInformation.create(pw.package, ai=pred, content_object=n)

  @shared_task(bind=True, queue="background")
  def identify_missing_rules(
      self,
@ -395,7 +449,7 @@ def batch_predict(
      standardized_substrates_and_smiles = []
      for substrate in substrate_and_names:
          try:
-             stand_smiles = FormatConverter.standardize(substrate[0])
+             stand_smiles = FormatConverter.standardize(substrate[0], remove_stereo=True)
              standardized_substrates_and_smiles.append([stand_smiles, substrate[1]])
          except ValueError:
              raise ValueError(

17
epdb/template_registry.py Normal file

@ -0,0 +1,17 @@
from collections import defaultdict
from threading import Lock
_registry = defaultdict(list)
_lock = Lock()
def register_template(slot: str, template_name: str, *, order: int = 100):
item = (order, template_name)
with _lock:
if item not in _registry[slot]:
_registry[slot].append(item)
_registry[slot].sort(key=lambda x: x[0])
def get_templates(slot: str):
return [template_name for _, template_name in _registry.get(slot, [])]


@ -2,6 +2,8 @@ from django import template
  from pydantic import AnyHttpUrl, ValidationError
  from pydantic.type_adapter import TypeAdapter

+ from epdb.template_registry import get_templates

  register = template.Library()
  url_adapter = TypeAdapter(AnyHttpUrl)
@ -19,3 +21,8 @@ def is_url(value):
          return True
      except ValidationError:
          return False

+ @register.simple_tag
+ def epdb_slot_templates(slot):
+     return get_templates(slot)


@ -209,5 +209,4 @@ urlpatterns = [
re_path(r"^contact$", v.static_contact_support, name="contact_support"), re_path(r"^contact$", v.static_contact_support, name="contact_support"),
re_path(r"^careers$", v.static_careers, name="careers"), re_path(r"^careers$", v.static_careers, name="careers"),
re_path(r"^cite$", v.static_cite, name="cite"), re_path(r"^cite$", v.static_cite, name="cite"),
re_path(r"^legal$", v.static_legal, name="legal"),
] ]

File diff suppressed because it is too large

Binary file not shown.

BIN
fixtures/db.dump Normal file

Binary file not shown.

File diff suppressed because one or more lines are too long

Binary file not shown.


@ -76,9 +76,7 @@ def migration(request):
          open(s.BASE_DIR / "fixtures" / "migration_status_per_rule.json")
      )
  else:
-     BBD = Package.objects.get(
-         url="http://localhost:8000/package/32de3cf4-e3e6-4168-956e-32fa5ddb0ce1"
-     )
+     BBD = Package.objects.get(uuid="32de3cf4-e3e6-4168-956e-32fa5ddb0ce1")
      ALL_SMILES = [
          cs.smiles for cs in CompoundStructure.objects.filter(compound__package=BBD)
      ]
@ -147,7 +145,7 @@ def migration_detail(request, package_uuid, rule_uuid):
      if request.method == "GET":
          context = get_base_context(request)
-         BBD = Package.objects.get(name="EAWAG-BBD")
+         BBD = Package.objects.get(uuid="32de3cf4-e3e6-4168-956e-32fa5ddb0ce1")
          STRUCTURES = CompoundStructure.objects.filter(compound__package=BBD)
          rule = Rule.objects.get(package=BBD, uuid=rule_uuid)


@ -8,14 +8,14 @@
"build": "tailwindcss -i static/css/input.css -o static/css/output.css --minify" "build": "tailwindcss -i static/css/input.css -o static/css/output.css --minify"
}, },
"devDependencies": { "devDependencies": {
"@tailwindcss/cli": "^4.1.16", "@tailwindcss/cli": "^4.1.18",
"@tailwindcss/postcss": "^4.1.16", "@tailwindcss/postcss": "^4.1.18",
"daisyui": "^5.4.3", "daisyui": "^5.5.14",
"postcss": "^8.5.6", "postcss": "^8.5.6",
"prettier": "^3.6.2", "prettier": "^3.7.4",
"prettier-plugin-jinja-template": "^2.1.0", "prettier-plugin-jinja-template": "^2.1.0",
"prettier-plugin-tailwindcss": "^0.7.1", "prettier-plugin-tailwindcss": "^0.7.2",
"tailwindcss": "^4.1.16" "tailwindcss": "^4.1.18"
}, },
"keywords": [ "keywords": [
"django", "django",

361
pepper/__init__.py Normal file

@ -0,0 +1,361 @@
import logging
import math
import os
import pickle
from datetime import datetime
from typing import Any, List, Optional
import polars as pl
from pydantic import computed_field
from sklearn.metrics import (
mean_absolute_error,
mean_squared_error,
r2_score,
root_mean_squared_error,
)
from sklearn.model_selection import ShuffleSplit
# Once stable these will be exposed by enviPy-plugins lib
from envipy_additional_information import register # noqa: I001
from bridge.contracts import Property, PropertyType # noqa: I001
from bridge.dto import (
BuildResult,
EnviPyDTO,
EvaluationResult,
PredictedProperty,
RunResult,
) # noqa: I001
from .impl.pepper import Pepper # noqa: I001
logger = logging.getLogger(__name__)
@register("pepperprediction")
class PepperPrediction(PredictedProperty):
mean: float | None
std: float | None
log_mean: float | None
log_std: float | None
@computed_field
@property
def svg(self, xscale="linear", quantiles=(0.01, 0.99), n_points=2000) -> Optional[str]:
        """
        Plot the lognormal distribution of chemical half-lives where parameters are
        given on a base-10 log scale: log10(half-life) ~ Normal(mu_log10, sigma_log10^2).
        Shades:
        - x < a in green (Non-persistent)
        - a <= x <= b in yellow (Persistent)
        - x > b in red (Very persistent)
        Legend shows the shaded color and the probability mass in each region.
        """
        import io

        import matplotlib.patches as mpatches
        import numpy as np
        from matplotlib import pyplot as plt
        from scipy import stats
sigma_log10 = self.log_std
mu_log10 = self.log_mean
if sigma_log10 <= 0:
raise ValueError("sigma_log10 must be > 0")
# Persistent and Very Persistent thresholds in days from REACH (https://doi.org/10.26434/chemrxiv-2025-xmslf)
p = 120
vp = 180
# Convert base-10 log parameters to natural-log parameters for SciPy's lognorm
ln10 = np.log(10.0)
mu_ln = mu_log10 * ln10
sigma_ln = sigma_log10 * ln10
# SciPy parameterization: lognorm(s=sigma_ln, scale=exp(mu_ln))
dist = stats.lognorm(s=sigma_ln, scale=np.exp(mu_ln))
# Exact probabilities
p_green = dist.cdf(p) # P(X < p) prob not persistent
p_yellow = 1.0 - dist.cdf(p) # P (X > p) prob persistent
p_red = 1.0 - dist.cdf(vp) # P(X > vp) prob very persistent
# Plotting range
q_low, q_high = dist.ppf(quantiles)
x_min = max(1e-12, min(q_low, p) * 0.9)
x_max = max(q_high, vp) * 1.1
# Build x-grid (linear days axis)
if xscale == "log":
x = np.logspace(np.log10(x_min), np.log10(x_max), n_points)
else:
x = np.linspace(x_min, x_max, n_points)
y = dist.pdf(x)
# Masks for shading
mask_green = x < p
mask_yellow = (x >= p) & (x <= vp)
mask_red = x > vp
# Plot
fig, ax = plt.subplots(figsize=(9, 5.5))
ax.plot(x, y, color="#1f4e79", lw=2, label="Lognormal PDF")
if np.any(mask_green):
ax.fill_between(x[mask_green], y[mask_green], 0, color="tab:green", alpha=0.3)
if np.any(mask_yellow):
ax.fill_between(x[mask_yellow], y[mask_yellow], 0, color="gold", alpha=0.35)
if np.any(mask_red):
ax.fill_between(x[mask_red], y[mask_red], 0, color="tab:red", alpha=0.3)
# Threshold lines
ax.axvline(p, color="gray", ls="--", lw=1)
ax.axvline(vp, color="gray", ls="--", lw=1)
# Labels & title
ax.set_title(
f"Half-life Distribution (Lognormal)\nlog10 parameters: μ={mu_log10:g}, σ={sigma_log10:g}"
)
ax.set_xlabel("Half-life (days)")
ax.set_ylabel("Probability density")
ax.grid(True, alpha=0.25)
if xscale == "log":
ax.set_xscale("log") # not used in this example, but supported
# Legend with probabilities
patches = [
mpatches.Patch(
color="tab:green",
alpha=0.3,
label=f"Non-persistent (<{p:g} d): {p_green:.2%}",
),
mpatches.Patch(
color="gold",
alpha=0.35,
label=f"Persistent ({p:g}{vp:g} d): {p_yellow:.2%}",
),
mpatches.Patch(
color="tab:red",
alpha=0.3,
label=f"Very persistent (>{vp:g} d): {p_red:.2%}",
),
]
ax.legend(handles=patches, frameon=True)
plt.tight_layout()
# --- Export to SVG string ---
buf = io.StringIO()
fig.savefig(buf, format="svg", bbox_inches="tight")
svg = buf.getvalue()
plt.close(fig)
buf.close()
return svg
class PEPPER(Property):
def identifier(self) -> str:
return "pepper"
def display(self) -> str:
return "PEPPER"
def name(self) -> str:
return "Predict Environmental Pollutant PERsistence"
def requires_rule_packages(self) -> bool:
return False
def requires_data_packages(self) -> bool:
return True
def get_type(self) -> PropertyType:
return PropertyType.HEAVY
def generate_dataset(self, eP: EnviPyDTO) -> pl.DataFrame:
"""
Generates a dataset in the form of a Polars DataFrame containing compound information, including
SMILES strings and logarithmic values of degradation half-lives (dt50).
The dataset is built by iterating over a list of compounds, standardizing SMILES strings, and
calculating the logarithmic mean of the half-life intervals for different environmental scenarios
associated with each compound.
The resulting DataFrame will only include unique rows based on SMILES and logarithmic half-life
values.
Parameters:
eP (EnviPyDTO): An object that provides access to compound data and utility functions for
standardization and retrieval of half-life information.
Returns:
pl.DataFrame: The resulting dataset with unique rows containing compound structure identifiers,
standardized SMILES strings, and logarithmic half-life values.
Raises:
Exception: Exceptions are caught and logged during data processing, specifically when retrieving
half-life information.
Note:
- The logarithmic mean is calculated from the start and end intervals of the dt50 (half-life).
- Compounds not associated with any half-life data are skipped, and errors encountered during processing
are logged without halting the execution.
"""
columns = ["structure_id", "smiles", "dt50_log"]
rows = []
for c in eP.get_compounds():
hls = c.half_lifes()
if len(hls):
stand_smiles = eP.standardize(c.smiles, remove_stereo=True)
for scenario, half_lives in hls.items():
for h in half_lives:
# In the original Pepper code they take the mean of the start and end interval.
half_mean = (h.dt50.start + h.dt50.end) / 2
rows.append([str(c.url), stand_smiles, math.log10(half_mean)])
df = pl.DataFrame(data=rows, schema=columns, orient="row", infer_schema_length=None)
df = df.unique(subset=["smiles", "dt50_log"], keep="any", maintain_order=False)
return df
def save_dataset(self, df: pl.DataFrame, path: str):
with open(path, "wb") as fh:
pickle.dump(df, fh)
def load_dataset(self, path: str) -> pl.DataFrame:
with open(path, "rb") as fh:
return pickle.load(fh)
def build(self, eP: EnviPyDTO, *args, **kwargs) -> BuildResult | None:
logger.info(f"Start building PEPPER {eP.get_context().uuid}")
df = self.generate_dataset(eP)
if df.shape[0] == 0:
raise ValueError("No data found for building model")
p = Pepper()
p, train_ds = p.train_model(df)
ds_store_path = os.path.join(
eP.get_context().work_dir, f"pepper_ds_{eP.get_context().uuid}.pkl"
)
self.save_dataset(train_ds, ds_store_path)
model_store_path = os.path.join(
eP.get_context().work_dir, f"pepper_{eP.get_context().uuid}.pkl"
)
p.save_model(model_store_path)
logger.info(f"Finished building PEPPER {eP.get_context().uuid}")
def run(self, eP: EnviPyDTO, *args, **kwargs) -> RunResult:
load_path = os.path.join(eP.get_context().work_dir, f"pepper_{eP.get_context().uuid}.pkl")
p = Pepper.load_model(load_path)
X_new = [c.smiles for c in eP.get_compounds()]
predictions = p.predict_batch(X_new)
results = []
        # Use distinct names: `p` would shadow the loaded Pepper model above.
        for mean_log, std_log in zip(*predictions):
            if mean_log is None or std_log is None:
                result = {"log_mean": None, "mean": None, "log_std": None, "std": None}
            else:
                result = {
                    "log_mean": mean_log,
                    "mean": 10 ** mean_log,
                    "log_std": std_log,
                    "std": 10 ** std_log,
                }
            results.append(PepperPrediction(**result))
rr = RunResult(
producer=eP.get_context().url,
description=f"Generated at {datetime.now()}",
result=results,
)
return rr
def evaluate(self, eP: EnviPyDTO, *args, **kwargs) -> EvaluationResult | None:
logger.info(f"Start evaluating PEPPER {eP.get_context().uuid}")
load_path = os.path.join(eP.get_context().work_dir, f"pepper_{eP.get_context().uuid}.pkl")
p = Pepper.load_model(load_path)
df = self.generate_dataset(eP)
ds = p.preprocess_data(df)
y_pred = p.predict_batch(ds["smiles"])
# We only need the mean
if isinstance(y_pred, tuple):
y_pred = y_pred[0]
res = self.eval_stats(ds["dt50_bayesian_mean"], y_pred)
logger.info(f"Finished evaluating PEPPER {eP.get_context().uuid}")
return EvaluationResult(data=res)
def build_and_evaluate(self, eP: EnviPyDTO, *args, **kwargs) -> EvaluationResult | None:
logger.info(f"Start evaluating PEPPER {eP.get_context().uuid}")
ds_load_path = os.path.join(
eP.get_context().work_dir, f"pepper_ds_{eP.get_context().uuid}.pkl"
)
ds = self.load_dataset(ds_load_path)
n_splits = kwargs.get("n_splits", 20)
shuff = ShuffleSplit(n_splits=n_splits, test_size=0.1, random_state=42)
fold_metrics: List[dict[str, Any]] = []
for split_id, (train_index, test_index) in enumerate(shuff.split(ds)):
logger.info(f"Evaluation fold {split_id}/{n_splits} PEPPER {eP.get_context().uuid}")
train = ds[train_index]
test = ds[test_index]
model = Pepper()
model.train_model(train, preprocess=False)
features = test[model.descriptors.get_descriptor_names()].rows()
y_pred = model.predict_batch(features, is_smiles=False)
# We only need the mean for eval statistics but mean, std can be returned
            if isinstance(y_pred, (tuple, list)):
                y_pred = y_pred[0]
# Remove None if they occur
y_true_filtered, y_pred_filtered = [], []
for t, p in zip(test["dt50_bayesian_mean"], y_pred):
if p is None:
continue
y_true_filtered.append(t)
y_pred_filtered.append(p)
if len(y_true_filtered) == 0:
logger.warning("Skipping empty fold")
continue
fold_metrics.append(self.eval_stats(y_true_filtered, y_pred_filtered))
logger.info(f"Finished evaluating PEPPER {eP.get_context().uuid}")
return EvaluationResult(data=fold_metrics)
@staticmethod
def eval_stats(y_true, y_pred) -> dict[str, float]:
scores_dic = {
"r2": r2_score(y_true, y_pred),
"mse": mean_squared_error(y_true, y_pred),
"rmse": root_mean_squared_error(y_true, y_pred),
"mae": mean_absolute_error(y_true, y_pred),
}
return scores_dic
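The four scores returned by eval_stats are standard regression metrics with fixed relationships (e.g. rmse is the square root of mse). A minimal plain-Python re-implementation of the same quantities (illustrative only; the code above uses the sklearn functions):

```python
import math

def eval_stats(y_true, y_pred):
    """Plain-Python equivalents of the sklearn regression metrics."""
    n = len(y_true)
    errors = [t - p for t, p in zip(y_true, y_pred)]
    ss_res = sum(e * e for e in errors)          # residual sum of squares
    mse = ss_res / n
    mae = sum(abs(e) for e in errors) / n
    mean_true = sum(y_true) / n
    ss_tot = sum((t - mean_true) ** 2 for t in y_true)
    r2 = 1.0 - ss_res / ss_tot                   # coefficient of determination
    return {"r2": r2, "mse": mse, "rmse": math.sqrt(mse), "mae": mae}

scores = eval_stats([1.0, 2.0, 3.0], [1.1, 1.9, 3.2])
```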

pepper/impl/__init__.py (new, empty file)

pepper/impl/bayesian.py (new file)
@@ -0,0 +1,196 @@
import emcee
import numpy as np
from scipy.stats import lognorm, norm
class Bayesian:
def __init__(self, y, comment_list=None):
if comment_list is None:
comment_list = []
self.y = y
self.comment_list = comment_list
# LOQ default settings
self.LOQ_lower = -1 # (2.4 hours)
self.LOQ_upper = 3 # 1000 days
# prior default settings
self.prior_mu_mean = 1.5
self.prior_mu_std = 2
self.prior_sigma_mean = 0.4
self.prior_sigma_std = 0.4
self.lower_limit_sigma = 0.2
# EMCEE defaults
self.nwalkers = 10
self.iterations = 2000
self.burn_in = 100
ndim = 2 # number of dimensions (mean, std)
# backend = emcee.backends.HDFBackend("backend.h5")
# backend.reset(self.nwalkers, ndim)
self.sampler = emcee.EnsembleSampler(self.nwalkers, ndim, self.logPosterior)
self.posterior_mu = None
self.posterior_sigma = None
def get_censored_values_only(self):
censored_values = []
for i, comment in enumerate(self.comment_list):
if comment in ["<", ">"]:
censored_values.append(self.y[i])
elif self.y[i] > self.LOQ_upper or self.y[i] < self.LOQ_lower:
censored_values.append(self.y[i])
return censored_values
# Class functions
def determine_LOQ(self):
"""
Determine whether an upper and/or lower LOQ applies and its value (if not the default)
:return: upper_LOQ, lower_LOQ (np.nan if not applicable)
"""
censored_values = self.get_censored_values_only()
# Find upper LOQ
upper_LOQ = np.nan
# bigger than global LOQ
if max(self.y) >= self.LOQ_upper:
upper_LOQ = self.LOQ_upper
# case if exactly 365 days
elif max(self.y) == 2.562: # 365 days
upper_LOQ = 2.562
self.LOQ_upper = upper_LOQ
# case if "bigger than" indication in comments
elif ">" in self.comment_list:
i = 0
while i < len(self.y):
if self.y[i] == min(censored_values) and self.comment_list[i] == ">":
upper_LOQ = self.LOQ_upper = self.y[i]
break
i += 1
# Find lower LOQ
lower_LOQ = np.nan
# smaller than global LOQ
if min(self.y) <= self.LOQ_lower:
lower_LOQ = self.LOQ_lower
# case if exactly 1 day
elif min(self.y) == 0: # 1 day
lower_LOQ = 0
self.LOQ_lower = 0
# case if "smaller than" indication in comments
elif "<" in self.comment_list:
i = 0
while i < len(self.y):
if self.y[i] == max(censored_values) and self.comment_list[i] == "<":
lower_LOQ = self.LOQ_lower = self.y[i]
break
i += 1
return upper_LOQ, lower_LOQ
def logLikelihood(self, theta, sigma):
"""
Log-likelihood of the observations given the model parameters, combining
censored (via CDF / survival function) and uncensored (via PDF) contributions
:param theta: mean half-life value to be evaluated
:param sigma: std half-life value to be evaluated
:return: log_likelihood
"""
upper_LOQ, lower_LOQ = self.determine_LOQ()
n_censored_upper = 0
n_censored_lower = 0
y_not_cen = []
if np.isnan(upper_LOQ) and np.isnan(lower_LOQ):
y_not_cen = self.y
else:
for i in self.y:
if not np.isnan(upper_LOQ) and i >= upper_LOQ: # censor above threshold
n_censored_upper += 1
elif not np.isnan(lower_LOQ) and i <= lower_LOQ: # censor below threshold
n_censored_lower += 1
else: # do not censor
y_not_cen.append(i)
LL_left_cen = 0
LL_right_cen = 0
LL_not_cen = 0
# accumulate log-likelihood contributions from censored and uncensored observations
if n_censored_lower > 0: # loglikelihood for left censored observations
LL_left_cen = n_censored_lower * norm.logcdf(
lower_LOQ, loc=theta, scale=sigma
) # cumulative distribution function CDF
if n_censored_upper > 0: # loglikelihood for right censored observations
LL_right_cen = n_censored_upper * norm.logsf(
upper_LOQ, loc=theta, scale=sigma
) # survival function (1-CDF)
if len(y_not_cen) > 0: # loglikelihood for uncensored values
LL_not_cen = sum(
norm.logpdf(y_not_cen, loc=theta, scale=sigma)
) # probability density function PDF
return LL_left_cen + LL_not_cen + LL_right_cen
def get_prior_probability_sigma(self, sigma):
# convert mean and sd to logspace parameters, to see this formula check
# https://en.wikipedia.org/wiki/Log-normal_distribution under Method of moments section
temp = 1 + (self.prior_sigma_std / self.prior_sigma_mean) ** 2
meanlog = self.prior_sigma_mean / np.sqrt(temp)
sdlog = np.sqrt(np.log(temp))
# calculate the log-pdf of sigma under the log-normal prior
norm_pdf_sigma = lognorm.logpdf(sigma, s=sdlog, loc=self.lower_limit_sigma, scale=meanlog)
return norm_pdf_sigma
def get_prior_probability_theta(self, theta):
norm_pdf_theta = norm.logpdf(theta, loc=self.prior_mu_mean, scale=self.prior_mu_std)
return norm_pdf_theta
def logPrior(self, par):
"""
Obtain prior loglikelihood of [theta, sigma]
:param par: par = [theta,sigma]
:return: loglikelihood
"""
# calculate the mean and standard deviation in the log-space
norm_pdf_mean = self.get_prior_probability_theta(par[0])
norm_pdf_std = self.get_prior_probability_sigma(par[1])
log_norm_pdf = [norm_pdf_mean, norm_pdf_std]
return sum(log_norm_pdf)
def logPosterior(self, par):
"""
Obtain posterior loglikelihood
:param par: [theta, sigma]
:return: posterior loglikelihood
"""
logpri = self.logPrior(par)
if not np.isfinite(logpri):
return -np.inf
loglikelihood = self.logLikelihood(par[0], par[1])
return logpri + loglikelihood
def get_posterior_distribution(self):
"""
Sample posterior distribution and get median of mean and std samples
:return: posterior half-life mean and std
"""
if self.posterior_mu is not None:
return self.posterior_mu, self.posterior_sigma
# Sampler parameters
ndim = 2 # number of dimensions (mean,std)
p0 = abs(np.random.randn(self.nwalkers, ndim)) # only positive starting numbers (for std)
# Sample distribution
self.sampler.run_mcmc(p0, self.iterations)
# flatten the merged chains, discarding the burn-in samples
samples = self.sampler.get_chain(flat=True, discard=self.burn_in)
# get median mean and std
self.posterior_mu = np.median(samples[:, 0])
self.posterior_sigma = np.median(samples[:, 1])
return self.posterior_mu, self.posterior_sigma
# Utility functions
def get_normal_distribution(x, mu, sig):
"""Unnormalised Gaussian kernel (the 1/(sig*sqrt(2*pi)) factor is omitted)."""
return np.exp(-np.power(x - mu, 2.0) / (2 * np.power(sig, 2.0)))
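The censored log-likelihood above combines a log-CDF term for left-censored values, a log-survival term for right-censored ones, and a log-PDF term for the rest. A minimal plain-Python sketch of that construction (toy LOQ values; an `erf`-based normal CDF stands in for `scipy.stats.norm`):

```python
import math

def norm_logpdf(x, mu, sigma):
    """Log of the normal probability density function."""
    return -0.5 * math.log(2 * math.pi * sigma ** 2) - (x - mu) ** 2 / (2 * sigma ** 2)

def norm_cdf(x, mu, sigma):
    """Normal cumulative distribution function via the error function."""
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

def censored_loglik(y, mu, sigma, loq_lower, loq_upper):
    """Left-censored values contribute log CDF(loq_lower), right-censored
    values log(1 - CDF(loq_upper)), uncensored values the ordinary log PDF."""
    ll = 0.0
    for v in y:
        if v <= loq_lower:
            ll += math.log(norm_cdf(loq_lower, mu, sigma))
        elif v >= loq_upper:
            ll += math.log(1.0 - norm_cdf(loq_upper, mu, sigma))
        else:
            ll += norm_logpdf(v, mu, sigma)
    return ll

ll = censored_loglik([-1.5, 0.2, 1.1, 3.4], mu=0.5, sigma=1.0, loq_lower=-1, loq_upper=3)
```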

@@ -0,0 +1,11 @@
GPR:
name: Gaussian Process Regressor
regressor: GaussianProcessRegressor
regressor_params:
normalize_y: True
n_restarts_optimizer: 0
kernel: "ConstantKernel(1.0, (1e-3, 1e3)) * Matern(length_scale=2.5, length_scale_bounds=(1e-3, 1e3), nu=0.5)"
feature_reduction_method: None
feature_reduction_parameters:
pca:
n_components: 34
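pepper.py builds the actual kernel object by `eval`-ing this string. A sketch of a slightly safer variant that evaluates only against a whitelist of names (the kernel classes below are hypothetical stand-ins; the real code uses sklearn's `ConstantKernel` and `Matern`):

```python
# Stand-in kernel classes; in Pepper these come from sklearn.gaussian_process.kernels
class ConstantKernel:
    def __init__(self, value, bounds):
        self.value, self.bounds = value, bounds

    def __mul__(self, other):
        # sklearn returns a Product kernel; a tagged tuple stands in here
        return ("product", self, other)

class Matern:
    def __init__(self, length_scale, length_scale_bounds, nu):
        self.length_scale = length_scale
        self.length_scale_bounds = length_scale_bounds
        self.nu = nu

ALLOWED = {"ConstantKernel": ConstantKernel, "Matern": Matern}

kernel_expr = (
    "ConstantKernel(1.0, (1e-3, 1e3)) * "
    "Matern(length_scale=2.5, length_scale_bounds=(1e-3, 1e3), nu=0.5)"
)
# Empty __builtins__ so only whitelisted names resolve during eval
kernel = eval(kernel_expr, {"__builtins__": {}}, ALLOWED)
```

This keeps the config-driven flexibility while refusing arbitrary expressions, which addresses the "maybe better than using eval" note in the Pepper constructor.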

View File

@ -0,0 +1,60 @@
from abc import ABC, abstractmethod
from typing import List
from mordred import Calculator, descriptors
from padelpy import from_smiles
from rdkit import Chem
class Descriptor(ABC):
@abstractmethod
def get_molecule_descriptors(self, molecule: str) -> List[float | int] | None:
pass
@abstractmethod
def get_descriptor_names(self) -> List[str]:
pass
class Mordred(Descriptor):
calc = Calculator(descriptors, ignore_3D=True)
def get_molecule_descriptors(self, molecule: str) -> List[float | int] | None:
mol = Chem.MolFromSmiles(molecule)
res = list(self.calc(mol))
return res
def get_descriptor_names(self) -> List[str]:
return [f"Mordred_{i}" for i in range(len(self.calc.descriptors))]
class PaDEL(Descriptor):
def get_molecule_descriptors(self, molecule: str) -> List[float | int] | None:
try:
padel_descriptors = from_smiles(molecule, threads=1)
except RuntimeError:
return []
formatted = []
for k, v in padel_descriptors.items():
try:
formatted.append(float(v))
except ValueError:
formatted.append(0.0)
return formatted
def get_descriptor_names(self) -> List[str]:
return [f"PaDEL_{i}" for i in range(1875)]
if __name__ == "__main__":
mol = "CC1=CC(O)=CC=C1[N+](=O)[O-]"
m = Mordred()
print(list(m.get_molecule_descriptors(mol)))
p = PaDEL()
print(list(p.get_molecule_descriptors(mol)))
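The `Descriptor` ABC only demands the two methods above. A toy subclass that needs neither RDKit nor PaDEL (treating the SMILES as a plain character string, purely illustrative) shows the contract:

```python
from abc import ABC, abstractmethod
from typing import List

class Descriptor(ABC):
    @abstractmethod
    def get_molecule_descriptors(self, molecule: str) -> List[float]: ...

    @abstractmethod
    def get_descriptor_names(self) -> List[str]: ...

class CharCounts(Descriptor):
    """Toy descriptor: counts of a few atom symbols in the raw SMILES string."""
    symbols = ["C", "N", "O"]

    def get_molecule_descriptors(self, molecule: str) -> List[float]:
        return [float(molecule.count(s)) for s in self.symbols]

    def get_descriptor_names(self) -> List[str]:
        return [f"count_{s}" for s in self.symbols]

d = CharCounts()
vec = d.get_molecule_descriptors("CC1=CC(O)=CC=C1[N+](=O)[O-]")
```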

pepper/impl/pepper.py (new file)
@@ -0,0 +1,329 @@
import importlib.resources
import logging
import math
import os
import pickle
from collections import defaultdict
from typing import List
import numpy as np
import polars as pl
import yaml
from joblib import Parallel, delayed
from scipy.cluster import hierarchy
from scipy.spatial.distance import squareform
from scipy.stats import spearmanr
from sklearn.feature_selection import VarianceThreshold
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import FunctionTransformer, MinMaxScaler
from .bayesian import Bayesian
from .descriptors import Mordred
class Pepper:
def __init__(self, config_path=None, random_state=42):
self.random_state = random_state
if config_path is None:
config_path = importlib.resources.files("pepper.impl.config").joinpath(
"regressor_settings_singlevalue_soil_paper_GPR_optimized.yml"
)
with open(config_path, "r") as file:
regressor_settings = yaml.safe_load(file)
if len(regressor_settings) > 1:
logging.warning(
f"More than one regressor config found in {config_path}, using the first one"
)
self.regressor_settings = regressor_settings[list(regressor_settings.keys())[0]]
if "kernel" in self.regressor_settings["regressor_params"]:
from sklearn.gaussian_process.kernels import ConstantKernel, Matern # noqa: F401
# The supported kernels could be hard-coded instead, which would avoid using eval
self.regressor_settings["regressor_params"]["kernel"] = eval(
self.regressor_settings["regressor_params"]["kernel"]
)
# We assume the YAML has the key regressor containing a regressor name
self.regressor = self.get_regressor_by_name(self.regressor_settings["regressor"])
if "regressor_params" in self.regressor_settings: # Set params if any are given
self.regressor.set_params(**self.regressor_settings["regressor_params"])
# TODO we could make this configurable
self.descriptors = Mordred()
self.descriptor_subset = None
self.min_max_scaler = MinMaxScaler().set_output(transform="polars")
self.feature_preselector = Pipeline(
[
(
"variance_threshold",
VarianceThreshold(threshold=0.02).set_output(transform="polars"),
),
# Feature selection based on variance threshold
(
"custom_feature_selection",
FunctionTransformer(
func=self.remove_highly_correlated_features,
validate=False,
kw_args={"corr_method": "spearman", "cluster_threshold": 0.01},
).set_output(transform="polars"),
),
]
)
def get_regressor_by_name(self, regressor_string):
"""
Load regressor function from a regressor name
:param regressor_string: name of the regressor class as defined in the config file
:return: Regressor object
"""
# if regressor_string == 'RandomForestRegressor':
# return RandomForestRegressor(random_state=self.random_state)
# elif regressor_string == 'GradientBoostingRegressor':
# return GradientBoostingRegressor(random_state=self.random_state)
# elif regressor_string == 'AdaBoostRegressor':
# return AdaBoostRegressor(random_state=self.random_state)
# elif regressor_string == 'MLPRegressor':
# return MLPRegressor(random_state=self.random_state)
# elif regressor_string == 'SVR':
# return SVR()
# elif regressor_string == 'KNeighborsRegressor':
# return KNeighborsRegressor()
if regressor_string == "GaussianProcessRegressor":
return GaussianProcessRegressor(random_state=self.random_state)
# elif regressor_string == 'DecisionTreeRegressor':
# return DecisionTreeRegressor(random_state=self.random_state)
# elif regressor_string == 'Ridge':
# return Ridge(random_state=self.random_state)
# elif regressor_string == 'SGDRegressor':
# return SGDRegressor(random_state=self.random_state)
# elif regressor_string == 'KernelRidge':
# return KernelRidge()
# elif regressor_string == 'LinearRegression':
# return LinearRegression()
# elif regressor_string == 'LSVR':
# return SVR(kernel='linear') # Linear Support Vector Regressor
else:
raise NotImplementedError(
f"No regressor type defined for regressor_string = {regressor_string}"
)
def train_model(self, train_data, preprocess=True):
"""
Fit self.regressor and preprocessors. train_data is a pl.DataFrame
"""
if preprocess:
# Compute the mean and std of half-lives per structure
train_data = self.preprocess_data(train_data)
# train_data structure:
# columns = [
# "structure_id",
# "smiles",
# "dt50_log",
# "dt50_bayesian_mean",
# "dt50_bayesian_std",
# ] + self.descriptors.get_descriptor_names()
# only select descriptor features for feature preselector
df = train_data[self.descriptors.get_descriptor_names()]
# Remove columns containing any null, NaN, or inf values
df = Pepper.keep_clean_columns(df)
# Scale and Remove highly correlated features as well as features having a low variance
x_train_normal = self.min_max_scaler.fit_transform(df)
x_train_normal = self.feature_preselector.fit_transform(x_train_normal)
# Store subset, as this is the input used for prediction
self.descriptor_subset = x_train_normal.columns
y_train = train_data["dt50_bayesian_mean"].to_numpy()
y_train_std = train_data["dt50_bayesian_std"].to_numpy()
self.regressor.set_params(alpha=y_train_std)
self.regressor.fit(x_train_normal, y_train)
return self, train_data
@staticmethod
def keep_clean_columns(df: pl.DataFrame) -> pl.DataFrame:
"""
Filters out columns from the DataFrame that contain null values, NaN, or infinite values.
This static method takes a DataFrame as input and evaluates each of its columns to determine
if the column contains invalid values. Columns that have null values, NaN, or infinite values
are excluded from the resulting DataFrame. The method is especially useful for cleaning up a
dataset by keeping only the valid columns.
Parameters:
df (polars.DataFrame): The input DataFrame to be cleaned.
Returns:
polars.DataFrame: A DataFrame containing only columns without null, NaN, or infinite values.
"""
valid_cols = []
for col in df.columns:
s = df[col]
# Check nulls
has_null = s.null_count() > 0
# Check NaN and inf only for numeric columns
if s.dtype.is_numeric():
has_nan = s.is_nan().any()
has_inf = s.is_infinite().any()
else:
has_nan = False
has_inf = False
if not (has_null or has_nan or has_inf):
valid_cols.append(col)
return df.select(valid_cols)
def preprocess_data(self, dataset):
groups = list(dataset.group_by("structure_id"))
# Unless explicitly requested, compute everything serially
if int(os.environ.get("N_PEPPER_THREADS", 1)) > 1:
results = Parallel(n_jobs=int(os.environ["N_PEPPER_THREADS"]))(
delayed(compute_bayes_per_group)(group[1]) for group in groups
)
else:
results = []
for g in groups:
results.append(compute_bayes_per_group(g[1]))
bayes_stats = pl.concat(results, how="vertical")
dataset = dataset.join(bayes_stats, on="structure_id", how="left")
# Remove duplicates after calculating mean, std
dataset = dataset.unique(subset="structure_id")
# Calculate and normalise features, make a "desc" column with the features
dataset = dataset.with_columns(
pl.col("smiles")
.map_elements(
self.descriptors.get_molecule_descriptors, return_dtype=pl.List(pl.Float64)
)
.alias("desc")
)
# If a SMILES fails to get desc it is removed
dataset = dataset.filter(pl.col("desc").is_not_null() & (pl.col("desc").list.len() > 0))
# Flatten the features into the dataset
dataset = dataset.with_columns(
pl.col("desc").list.to_struct(fields=self.descriptors.get_descriptor_names())
).unnest("desc")
return dataset
def predict_batch(self, batch: List[str], is_smiles: bool = True) -> List[List[float | None]]:
if is_smiles:
rows = [self.descriptors.get_molecule_descriptors(smiles) for smiles in batch]
else:
rows = batch
# Create Dataframe with all descriptors
initial_desc_rows_df = pl.DataFrame(
data=rows, schema=self.descriptors.get_descriptor_names(), orient="row"
)
# Before checking for invalid values per row, select only required columns
initial_desc_rows_df = initial_desc_rows_df.select(
list(self.min_max_scaler.feature_names_in_)
)
to_pad = []
adjusted_rows = []
for i, row in enumerate(initial_desc_rows_df.rows()):
# neither infs nor nans found -> row is valid input
if row and all(math.isfinite(x) for x in row):
adjusted_rows.append(row)
else:
to_pad.append(i)
if adjusted_rows:
desc_rows_df = pl.DataFrame(
data=adjusted_rows, schema=list(self.min_max_scaler.feature_names_in_), orient="row"
)
x_normal = self.min_max_scaler.transform(desc_rows_df)
x_normal = x_normal[self.descriptor_subset]
res = self.regressor.predict(x_normal, return_std=True)
# Convert to lists
res = [list(res[0]), list(res[1])]
# If we had rows containing bad input (inf, nan) insert Nones at the correct position
if to_pad:
for i in to_pad:
res[0].insert(i, None)
res[1].insert(i, None)
return res
else:
return [[None] * len(batch), [None] * len(batch)]
@staticmethod
def remove_highly_correlated_features(
X_train,
corr_method: str = "spearman",
cluster_threshold: float = 0.01,
ignore=False,
):
if ignore:
return X_train
else:
# Use scipy's spearmanr to replicate pandas.DataFrame.corr, which polars lacks
corr = spearmanr(X_train, axis=0).statistic
# Ensure the correlation matrix is symmetric
corr = (corr + corr.T) / 2
np.fill_diagonal(corr, 1)
corr = np.nan_to_num(corr)
# code from https://scikit-learn.org/stable/auto_examples/inspection/
# plot_permutation_importance_multicollinear.html
# We convert the correlation matrix to a distance matrix before performing
# hierarchical clustering using Ward's linkage.
distance_matrix = 1 - np.abs(corr)
dist_linkage = hierarchy.ward(squareform(distance_matrix))
cluster_ids = hierarchy.fcluster(dist_linkage, cluster_threshold, criterion="distance")
cluster_id_to_feature_ids = defaultdict(list)
for idx, cluster_id in enumerate(cluster_ids):
cluster_id_to_feature_ids[cluster_id].append(idx)
my_selected_features = [v[0] for v in cluster_id_to_feature_ids.values()]
X_train_sel = X_train[:, my_selected_features]
return X_train_sel
def save_model(self, path):
with open(path, "wb") as save_file:
pickle.dump(self, save_file, protocol=5)
@staticmethod
def load_model(path) -> "Pepper":
with open(path, "rb") as load_file:
return pickle.load(load_file)
def compute_bayes_per_group(group):
"""Get mean and std using bayesian"""
mean, std = Bayesian(group["dt50_log"]).get_posterior_distribution()
return pl.DataFrame(
{
"structure_id": [group["structure_id"][0]],
"dt50_bayesian_mean": [mean],
"dt50_bayesian_std": [std],
}
)
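predict_batch drops rows containing NaN/inf before calling the regressor, then re-inserts `None` at the original positions. That padding logic in isolation (hypothetical `pad_predictions` helper, not part of the module):

```python
import math

def pad_predictions(rows, predict):
    """Run predict() on valid rows only, then re-insert None at the
    positions of rows that contained NaN or inf (or were empty)."""
    to_pad, valid = [], []
    for i, row in enumerate(rows):
        if row and all(math.isfinite(x) for x in row):
            valid.append(row)
        else:
            to_pad.append(i)
    preds = predict(valid) if valid else []
    # Inserting in ascending index order restores the original positions
    for i in to_pad:
        preds.insert(i, None)
    return preds

rows = [[1.0, 2.0], [float("nan"), 0.0], [3.0, 4.0]]
out = pad_predictions(rows, lambda rs: [sum(r) for r in rs])
```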

pnpm-lock.yaml (generated)
@@ -9,29 +9,29 @@ importers:
   .:
     devDependencies:
       '@tailwindcss/cli':
-        specifier: ^4.1.16
-        version: 4.1.16
+        specifier: ^4.1.18
+        version: 4.1.18
       '@tailwindcss/postcss':
-        specifier: ^4.1.16
-        version: 4.1.16
+        specifier: ^4.1.18
+        version: 4.1.18
       daisyui:
-        specifier: ^5.4.3
-        version: 5.4.3
+        specifier: ^5.5.14
+        version: 5.5.14
       postcss:
         specifier: ^8.5.6
         version: 8.5.6
       prettier:
-        specifier: ^3.6.2
-        version: 3.6.2
+        specifier: ^3.7.4
+        version: 3.7.4
       prettier-plugin-jinja-template:
         specifier: ^2.1.0
-        version: 2.1.0(prettier@3.6.2)
+        version: 2.1.0(prettier@3.7.4)
       prettier-plugin-tailwindcss:
-        specifier: ^0.7.1
-        version: 0.7.1(prettier@3.6.2)
+        specifier: ^0.7.2
+        version: 0.7.2(prettier@3.7.4)
       tailwindcss:
-        specifier: ^4.1.16
-        version: 4.1.16
+        specifier: ^4.1.18
+        version: 4.1.18
   packages:
dependencies: dependencies:
graceful-fs: 4.2.11 graceful-fs: 4.2.11
tapable: 2.3.0 tapable: 2.3.0
@ -719,19 +719,19 @@ snapshots:
picocolors: 1.1.1 picocolors: 1.1.1
source-map-js: 1.2.1 source-map-js: 1.2.1
prettier-plugin-jinja-template@2.1.0(prettier@3.6.2): prettier-plugin-jinja-template@2.1.0(prettier@3.7.4):
dependencies: dependencies:
prettier: 3.6.2 prettier: 3.7.4
prettier-plugin-tailwindcss@0.7.1(prettier@3.6.2): prettier-plugin-tailwindcss@0.7.2(prettier@3.7.4):
dependencies: dependencies:
prettier: 3.6.2 prettier: 3.7.4
prettier@3.6.2: {} prettier@3.7.4: {}
source-map-js@1.2.1: {} source-map-js@1.2.1: {}
tailwindcss@4.1.16: {} tailwindcss@4.1.18: {}
tapable@2.3.0: {} tapable@2.3.0: {}

pnpm-workspace.yaml (new file)

@@ -0,0 +1,3 @@
+onlyBuiltDependencies:
+  - '@parcel/watcher'
+  - '@tailwindcss/oxide'


@@ -19,6 +19,7 @@ dependencies = [
     "envipy-plugins",
     "epam-indigo>=1.30.1",
     "gunicorn>=23.0.0",
+    "jsonref>=1.1.0",
     "networkx>=3.4.2",
     "psycopg2-binary>=2.9.10",
     "python-dotenv>=1.1.0",
@@ -35,7 +36,7 @@ dependencies = [

 [tool.uv.sources]
 enviformer = { git = "ssh://git@git.envipath.com/enviPath/enviformer.git", rev = "v0.1.4" }
 envipy-plugins = { git = "ssh://git@git.envipath.com/enviPath/enviPy-plugins.git", rev = "v0.1.0" }
-envipy-additional-information = { git = "ssh://git@git.envipath.com/enviPath/enviPy-additional-information.git", rev = "v0.1.7" }
+envipy-additional-information = { git = "ssh://git@git.envipath.com/enviPath/enviPy-additional-information.git", branch = "develop" }
 envipy-ambit = { git = "ssh://git@git.envipath.com/enviPath/enviPy-ambit.git" }

 [project.optional-dependencies]
@@ -50,7 +51,13 @@ dev = [
     "pytest-django>=4.11.1",
     "pytest-cov>=7.0.0",
 ]
+pepper-plugin = [
+    "matplotlib>=3.10.8",
+    "pyyaml>=6.0.3",
+    "emcee>=3.1.6",
+    "mordredcommunity==2.0.7",
+    "padelpy" # Remove once we're certain we'll go with mordred
+]

 [tool.ruff]
 line-length = 100
@@ -85,8 +92,15 @@ build = { sequence = [
 ], help = "Build frontend assets and collect static files" }

 # Database tasks
-db-up = { cmd = "docker compose -f docker-compose.dev.yml up -d", help = "Start PostgreSQL database using Docker Compose" }
+db-up = { cmd = "docker compose -p envipath -f docker-compose.dev.yml up -d", help = "Start PostgreSQL database using Docker Compose" }
-db-down = { cmd = "docker compose -f docker-compose.dev.yml down", help = "Stop PostgreSQL database" }
+db-down = { cmd = "docker compose -p envipath -f docker-compose.dev.yml down", help = "Stop PostgreSQL database" }
+
+# Celery tasks
+celery = { cmd = "celery -A envipath worker -l INFO -Q predict,model,background", help = "Start Celery worker for async task processing" }
+celery-dev = { sequence = [
+    "db-up",
+    "celery",
+], help = "Start database and Celery worker" }

 # Frontend tasks
 js-deps = { cmd = "uv run python scripts/pnpm_wrapper.py install", help = "Install frontend dependencies" }


@@ -11,6 +11,8 @@ import signal
 import subprocess
 import sys
 import time
+import os
+import dotenv

 def find_pnpm():
@@ -65,6 +67,7 @@ class DevServerManager:
                 bufsize=1,
             )
             self.processes.append((process, description))
+            print(" ".join(command))
             print(f"✓ Started {description} (PID: {process.pid})")
             return process
         except Exception as e:
@@ -146,6 +149,7 @@ class DevServerManager:

 def main():
     """Main entry point."""
+    dotenv.load_dotenv()
     manager = DevServerManager()
     manager.register_cleanup()
@@ -174,9 +178,10 @@ def main():
         time.sleep(1)

     # Start Django dev server
+    port = os.environ.get("DJANGO_PORT", "8000")
     django_process = manager.start_process(
-        ["uv", "run", "python", "manage.py", "runserver"],
-        "Django server",
+        ["uv", "run", "python", "manage.py", "runserver", f"0:{port}"],
+        f"Django server on port {port}",
         shell=False,
     )


@@ -0,0 +1,379 @@
/**
* Alpine.js Schema Renderer Component
*
* Renders forms dynamically from JSON Schema with RJSF format support.
* Supports uiSchema for widget hints, labels, help text, and field ordering.
*
* Usage:
* <div x-data="schemaRenderer({
* rjsf: { schema: {...}, uiSchema: {...}, formData: {...}, groups: [...] },
* data: { interval: { start: 20, end: 25 } },
* mode: 'view', // 'view' | 'edit'
* endpoint: '/api/v1/scenario/{uuid}/information/temperature/'
* })">
*/
document.addEventListener("alpine:init", () => {
// Global validation error store with context scoping
Alpine.store('validationErrors', {
errors: {},
// Set errors for a specific context (UUID) or globally (no context)
setErrors(errors, context = null) {
if (context) {
// Namespace all field names with context prefix
const namespacedErrors = {};
Object.entries(errors).forEach(([field, messages]) => {
const key = `${context}.${field}`;
namespacedErrors[key] = messages;
});
// Merge into existing errors (preserves other contexts)
this.errors = { ...this.errors, ...namespacedErrors };
} else {
// No context - merge as-is for backward compatibility
this.errors = { ...this.errors, ...errors };
}
},
// Clear errors for a specific context or all errors
clearErrors(context = null) {
if (context) {
// Clear only errors for this context
const newErrors = {};
const prefix = `${context}.`;
Object.keys(this.errors).forEach(key => {
if (!key.startsWith(prefix)) {
newErrors[key] = this.errors[key];
}
});
this.errors = newErrors;
} else {
// Clear all errors
this.errors = {};
}
},
// Clear a specific field, optionally within a context
clearField(fieldName, context = null) {
const key = context ? `${context}.${fieldName}` : fieldName;
if (this.errors[key]) {
delete this.errors[key];
// Trigger reactivity by creating new object
this.errors = { ...this.errors };
}
},
// Check if a field has errors, optionally within a context
hasError(fieldName, context = null) {
const key = context ? `${context}.${fieldName}` : fieldName;
return Array.isArray(this.errors[key]) && this.errors[key].length > 0;
},
// Get errors for a field, optionally within a context
getErrors(fieldName, context = null) {
const key = context ? `${context}.${fieldName}` : fieldName;
return this.errors[key] || [];
}
});
Alpine.data("schemaRenderer", (options = {}) => ({
schema: null,
uiSchema: {},
data: {},
mode: options.mode || "view", // 'view' | 'edit'
endpoint: options.endpoint || "",
loading: false,
error: null,
context: options.context || null, // UUID for items, null for single forms
debugErrors:
options.debugErrors ??
(typeof window !== "undefined" &&
window.location?.search?.includes("debugErrors=1")),
attach_object: options.attach_object || null,
async init() {
if (options.schemaUrl) {
try {
this.loading = true;
const res = await fetch(options.schemaUrl);
if (!res.ok) {
throw new Error(`Failed to load schema: ${res.statusText}`);
}
const rjsf = await res.json();
// RJSF format: {schema, uiSchema, formData, groups}
if (!rjsf.schema) {
throw new Error("Invalid RJSF format: missing schema property");
}
this.schema = rjsf.schema;
this.uiSchema = rjsf.uiSchema || {};
this.data = options.data
? JSON.parse(JSON.stringify(options.data))
: rjsf.formData || {};
} catch (err) {
this.error = err.message;
console.error("Error loading schema:", err);
} finally {
this.loading = false;
}
} else if (options.rjsf) {
// Direct RJSF object passed
if (!options.rjsf.schema) {
throw new Error("Invalid RJSF format: missing schema property");
}
this.schema = options.rjsf.schema;
this.uiSchema = options.rjsf.uiSchema || {};
this.data = options.data
? JSON.parse(JSON.stringify(options.data))
: options.rjsf.formData || {};
}
// Initialize data from formData or options
if (!this.data || Object.keys(this.data).length === 0) {
this.data = {};
}
// Ensure all schema fields are properly initialized
if (this.schema && this.schema.properties) {
for (const [key, propSchema] of Object.entries(
this.schema.properties,
)) {
const widget = this.getWidget(key, propSchema);
if (widget === "interval") {
// Ensure interval fields are objects with start/end
if (!this.data[key] || typeof this.data[key] !== "object") {
this.data[key] = { start: null, end: null };
} else {
// Ensure start and end exist
if (this.data[key].start === undefined)
this.data[key].start = null;
if (this.data[key].end === undefined) this.data[key].end = null;
}
} else if (widget === "timeseries-table") {
// Ensure timeseries fields are arrays
if (!this.data[key] || !Array.isArray(this.data[key])) {
this.data[key] = [];
}
} else if (this.data[key] === undefined) {
// ONLY initialize if truly undefined, not just falsy
// This preserves empty strings, null, 0, false as valid values
if (propSchema.type === "boolean") {
this.data[key] = false;
} else if (
propSchema.type === "number" ||
propSchema.type === "integer"
) {
this.data[key] = null;
} else if (propSchema.enum) {
// For select fields, use null to show placeholder
this.data[key] = null;
} else {
this.data[key] = "";
}
}
// If data[key] exists (even if empty string or null), don't overwrite
}
}
// UX: Clear field errors when fields change (with context)
if (this.mode === "edit" && this.schema?.properties) {
Object.keys(this.schema.properties).forEach((key) => {
this.$watch(
`data.${key}`,
() => {
Alpine.store('validationErrors').clearField(key, this.context);
},
{ deep: true },
);
});
}
},
getWidget(fieldName, fieldSchema) {
// Defensive check: ensure fieldSchema is provided
if (!fieldSchema) return "text";
try {
// Check uiSchema first (RJSF format)
if (
this.uiSchema &&
this.uiSchema[fieldName] &&
this.uiSchema[fieldName]["ui:widget"]
) {
return this.uiSchema[fieldName]["ui:widget"];
}
// Check for interval type (object with start/end properties)
if (
fieldSchema.type === "object" &&
fieldSchema.properties &&
fieldSchema.properties.start &&
fieldSchema.properties.end
) {
return "interval";
}
// Check for measurements array type (timeseries-table widget)
if (
fieldSchema.type === "array" &&
fieldSchema.items?.properties?.timestamp &&
fieldSchema.items?.properties?.value
) {
return "timeseries-table";
}
// Infer from JSON Schema type
if (fieldSchema.enum) return "select";
if (fieldSchema.type === "number" || fieldSchema.type === "integer")
return "number";
if (fieldSchema.type === "boolean") return "checkbox";
return "text";
} catch (e) {
// Fallback to text widget if anything fails
console.warn("Error in getWidget:", e);
return "text";
}
},
getLabel(fieldName, fieldSchema) {
// Check uiSchema (RJSF format)
if (this.uiSchema[fieldName] && this.uiSchema[fieldName]["ui:label"]) {
return this.uiSchema[fieldName]["ui:label"];
}
// Default: format field name
return fieldName
.replace(/_/g, " ")
.replace(/\b\w/g, (c) => c.toUpperCase());
},
getFieldOrder() {
try {
// Get ordered list of field names based on ui:order
if (!this.schema || !this.schema.properties) return [];
// Only include fields that have UI configs
const fields = Object.keys(this.schema.properties).filter(
(fieldName) => this.uiSchema && this.uiSchema[fieldName],
);
// Sort by ui:order if available
return fields.sort((a, b) => {
const orderA = this.uiSchema[a]?.["ui:order"] || "999";
const orderB = this.uiSchema[b]?.["ui:order"] || "999";
return parseInt(orderA) - parseInt(orderB);
});
} catch (e) {
// Return empty array if anything fails to prevent errors
console.warn("Error in getFieldOrder:", e);
return [];
}
},
hasTimeseriesField() {
try {
// Check if any field in the schema is a timeseries-table widget
if (!this.schema || !this.schema.properties) {
return false;
}
return Object.keys(this.schema.properties).some((fieldName) => {
const fieldSchema = this.schema.properties[fieldName];
if (!fieldSchema) return false;
return this.getWidget(fieldName, fieldSchema) === "timeseries-table";
});
} catch (e) {
// Return false if anything fails to prevent errors
console.warn("Error in hasTimeseriesField:", e);
return false;
}
},
async submit() {
if (!this.endpoint) {
console.error("No endpoint specified for submission");
return;
}
this.loading = true;
this.error = null;
try {
const csrftoken =
document.querySelector("[name=csrf-token]")?.content || "";
const res = await fetch(this.endpoint, {
method: "POST",
headers: {
"Content-Type": "application/json",
"X-CSRFToken": csrftoken,
},
body: JSON.stringify(this.data),
});
if (!res.ok) {
let errorData;
try {
errorData = await res.json();
} catch {
errorData = { error: res.statusText };
}
// Handle validation errors (field-level)
Alpine.store('validationErrors').clearErrors();
// Try to parse structured error response
let parsedError = errorData;
// If error is a JSON string, parse it
if (
typeof errorData.error === "string" &&
errorData.error.startsWith("{")
) {
parsedError = JSON.parse(errorData.error);
}
if (parsedError.detail && Array.isArray(parsedError.detail)) {
// Pydantic validation errors format: [{loc: ['field'], msg: '...', type: '...'}]
const fieldErrors = {};
for (const err of parsedError.detail) {
const field =
err.loc && err.loc.length > 0
? err.loc[err.loc.length - 1]
: "root";
if (!fieldErrors[field]) {
fieldErrors[field] = [];
}
fieldErrors[field].push(
err.msg || err.message || "Validation error",
);
}
Alpine.store('validationErrors').setErrors(fieldErrors);
throw new Error(
"Validation failed. Please check the fields below.",
);
} else {
// General error
throw new Error(
parsedError.error ||
parsedError.detail ||
`Request failed: ${res.statusText}`,
);
}
}
// Clear errors on success
Alpine.store('validationErrors').clearErrors();
const result = await res.json();
return result;
} catch (err) {
this.error = err.message;
throw err;
} finally {
this.loading = false;
}
},
}));
});
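The `validationErrors` store above namespaces field errors per item UUID so that two form items sharing a field name stay isolated. A standalone sketch of that namespacing logic (plain object, no Alpine; `makeErrorStore` is a hypothetical stand-in for `Alpine.store('validationErrors')`):

```javascript
// Minimal sketch of the context-scoped error store, without Alpine.
// Keys are "context.fieldName" when a context (item UUID) is given.
function makeErrorStore() {
  return {
    errors: {},
    setErrors(errors, context = null) {
      const prefix = context ? `${context}.` : "";
      for (const [field, messages] of Object.entries(errors)) {
        this.errors[`${prefix}${field}`] = messages;
      }
    },
    getErrors(fieldName, context = null) {
      const key = context ? `${context}.${fieldName}` : fieldName;
      return this.errors[key] || [];
    },
    clearErrors(context = null) {
      if (!context) { this.errors = {}; return; }
      const prefix = `${context}.`;
      for (const key of Object.keys(this.errors)) {
        if (key.startsWith(prefix)) delete this.errors[key];
      }
    },
  };
}

const store = makeErrorStore();
store.setErrors({ start: ["must be a number"] }, "uuid-a");
store.setErrors({ start: ["required"] }, "uuid-b");
// Each item sees only its own errors for the shared field name "start"
console.log(store.getErrors("start", "uuid-a")); // ["must be a number"]
store.clearErrors("uuid-a");
console.log(store.getErrors("start", "uuid-a")); // []
console.log(store.getErrors("start", "uuid-b")); // ["required"]
```

Clearing one context leaves the other contexts' errors untouched, which is what lets several `schemaRenderer` instances share the single global store.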


@@ -0,0 +1,472 @@
/**
* Alpine.js Widget Components for Schema Forms
*
* Centralized widget component definitions for dynamic form rendering.
* Each widget receives explicit parameters instead of context object for better traceability.
*/
document.addEventListener("alpine:init", () => {
// Base widget factory with common functionality
const baseWidget = (
fieldName,
data,
schema,
uiSchema,
mode,
debugErrors,
context = null // NEW: context for error namespacing
) => ({
fieldName,
data,
schema,
uiSchema,
mode,
debugErrors,
context, // Store context for use in templates
// Field schema access
get fieldSchema() {
return this.schema?.properties?.[this.fieldName] || {};
},
// Common metadata
get label() {
// Check uiSchema first (RJSF format)
if (this.uiSchema?.[this.fieldName]?.["ui:label"]) {
return this.uiSchema[this.fieldName]["ui:label"];
}
// Fall back to schema title
if (this.fieldSchema.title) {
return this.fieldSchema.title;
}
// Default: format field name
return this.fieldName
.replace(/_/g, " ")
.replace(/\b\w/g, (c) => c.toUpperCase());
},
get helpText() {
return this.fieldSchema.description || "";
},
// Field-level unit extraction from uiSchema (RJSF format)
get unit() {
return this.uiSchema?.[this.fieldName]?.["ui:unit"] || null;
},
// Mode checks
get isViewMode() {
return this.mode === "view";
},
get isEditMode() {
return this.mode === "edit";
},
});
// Text widget
Alpine.data(
"textWidget",
(fieldName, data, schema, uiSchema, mode, debugErrors, context = null) => ({
...baseWidget(
fieldName,
data,
schema,
uiSchema,
mode,
debugErrors,
context,
),
get value() {
return this.data[this.fieldName] || "";
},
set value(v) {
this.data[this.fieldName] = v;
},
}),
);
// Textarea widget
Alpine.data(
"textareaWidget",
(fieldName, data, schema, uiSchema, mode, debugErrors, context = null) => ({
...baseWidget(
fieldName,
data,
schema,
uiSchema,
mode,
debugErrors,
context,
),
get value() {
return this.data[this.fieldName] || "";
},
set value(v) {
this.data[this.fieldName] = v;
},
}),
);
// Number widget with unit support
Alpine.data(
"numberWidget",
(fieldName, data, schema, uiSchema, mode, debugErrors, context = null) => ({
...baseWidget(
fieldName,
data,
schema,
uiSchema,
mode,
debugErrors,
context,
),
get value() {
return this.data[this.fieldName];
},
set value(v) {
this.data[this.fieldName] =
v === "" || v === null ? null : parseFloat(v);
},
get hasValue() {
return (
this.value !== null && this.value !== undefined && this.value !== ""
);
},
// Format value with unit for view mode
get displayValue() {
if (!this.hasValue) return "—";
return this.unit ? `${this.value} ${this.unit}` : String(this.value);
},
}),
);
// Select widget
Alpine.data(
"selectWidget",
(fieldName, data, schema, uiSchema, mode, debugErrors, context = null) => ({
...baseWidget(
fieldName,
data,
schema,
uiSchema,
mode,
debugErrors,
context,
),
get value() {
return this.data[this.fieldName] || "";
},
set value(v) {
this.data[this.fieldName] = v;
},
get multiple() {
return !!(this.fieldSchema.items && this.fieldSchema.items.enum);
},
get options() {
if (this.fieldSchema.enum) {
return this.fieldSchema.enum;
} else if (this.fieldSchema.items && this.fieldSchema.items.enum) {
return this.fieldSchema.items.enum;
} else {
return [];
}
},
}),
);
// Checkbox widget
Alpine.data(
"checkboxWidget",
(fieldName, data, schema, uiSchema, mode, debugErrors, context = null) => ({
...baseWidget(
fieldName,
data,
schema,
uiSchema,
mode,
debugErrors,
context,
),
get checked() {
return !!this.data[this.fieldName];
},
set checked(v) {
this.data[this.fieldName] = v;
},
}),
);
// Interval widget with unit support
Alpine.data(
"intervalWidget",
(fieldName, data, schema, uiSchema, mode, debugErrors, context = null) => ({
...baseWidget(
fieldName,
data,
schema,
uiSchema,
mode,
debugErrors,
context,
),
get start() {
return this.data[this.fieldName]?.start ?? null;
},
set start(v) {
if (!this.data[this.fieldName]) this.data[this.fieldName] = {};
this.data[this.fieldName].start =
v === "" || v === null ? null : parseFloat(v);
},
get end() {
return this.data[this.fieldName]?.end ?? null;
},
set end(v) {
if (!this.data[this.fieldName]) this.data[this.fieldName] = {};
this.data[this.fieldName].end =
v === "" || v === null ? null : parseFloat(v);
},
// Format interval with unit for view mode
get displayValue() {
const s = this.start,
e = this.end;
const unitStr = this.unit ? ` ${this.unit}` : "";
if (s !== null && e !== null) return `${s} ${e}${unitStr}`;
if (s !== null) return `${s}${unitStr}`;
if (e !== null) return `${e}${unitStr}`;
return "—";
},
get isSameValue() {
return this.start !== null && this.start === this.end;
},
// Validation: start must be <= end (client-side)
get hasValidationError() {
if (this.isViewMode) return false;
const s = this.start;
const e = this.end;
// Only validate if both values are provided
if (
s !== null &&
e !== null &&
typeof s === "number" &&
typeof e === "number"
) {
return s > e;
}
return false;
},
}),
);
// PubMed link widget
Alpine.data(
"pubmedWidget",
(fieldName, data, schema, uiSchema, mode, debugErrors, context = null) => ({
...baseWidget(
fieldName,
data,
schema,
uiSchema,
mode,
debugErrors,
context,
),
get value() {
return this.data[this.fieldName] || "";
},
set value(v) {
this.data[this.fieldName] = v;
},
get pubmedUrl() {
return this.value
? `https://pubmed.ncbi.nlm.nih.gov/${this.value}`
: null;
},
}),
);
// Compound link widget
Alpine.data(
"compoundWidget",
(fieldName, data, schema, uiSchema, mode, debugErrors, context = null) => ({
...baseWidget(
fieldName,
data,
schema,
uiSchema,
mode,
debugErrors,
context,
),
get value() {
return this.data[this.fieldName] || "";
},
set value(v) {
this.data[this.fieldName] = v;
},
}),
);
// TimeSeries table widget
Alpine.data(
"timeseriesTableWidget",
(fieldName, data, schema, uiSchema, mode, debugErrors, context = null) => ({
...baseWidget(
fieldName,
data,
schema,
uiSchema,
mode,
debugErrors,
context,
),
chartInstance: null,
// Getter/setter for measurements array
get measurements() {
return this.data[this.fieldName] || [];
},
set measurements(v) {
this.data[this.fieldName] = v;
},
// Get description from sibling field
get description() {
return this.data?.description || "";
},
// Get method from sibling field
get method() {
return this.data?.method || "";
},
// Computed property for chart options
get chartOptions() {
return {
measurements: this.measurements,
xAxisLabel: this.data?.x_axis_label || "Time",
yAxisLabel: this.data?.y_axis_label || "Value",
xAxisUnit: this.data?.x_axis_unit || "",
yAxisUnit: this.data?.y_axis_unit || "",
};
},
// Add new measurement
addMeasurement() {
if (!this.data[this.fieldName]) {
this.data[this.fieldName] = [];
}
this.data[this.fieldName].push({
timestamp: null,
value: null,
error: null,
note: "",
});
},
// Remove measurement by index
removeMeasurement(index) {
if (
this.data[this.fieldName] &&
Array.isArray(this.data[this.fieldName])
) {
this.data[this.fieldName].splice(index, 1);
}
},
// Update specific measurement field
updateMeasurement(index, field, value) {
if (this.data[this.fieldName] && this.data[this.fieldName][index]) {
if (field === "timestamp" || field === "value" || field === "error") {
// Parse all numeric fields (timestamp is days as float)
this.data[this.fieldName][index][field] =
value === "" || value === null ? null : parseFloat(value);
} else {
// Store other fields as-is
this.data[this.fieldName][index][field] = value;
}
}
},
// Format timestamp for display (timestamp is numeric days as float)
formatTimestamp(timestamp) {
return timestamp ?? "";
},
// Sort by timestamp (numeric days)
sortByTimestamp() {
if (
this.data[this.fieldName] &&
Array.isArray(this.data[this.fieldName])
) {
this.data[this.fieldName].sort((a, b) => {
const tsA = a.timestamp ?? Infinity;
const tsB = b.timestamp ?? Infinity;
return tsA - tsB;
});
}
},
// Chart lifecycle methods (delegates to TimeSeriesChart utility)
initChart() {
if (!this.isViewMode || !window.Chart || !window.TimeSeriesChart)
return;
const canvas = this.$refs?.chartCanvas;
if (!canvas) return;
this.destroyChart();
if (this.measurements.length === 0) return;
this.chartInstance = window.TimeSeriesChart.create(
canvas,
this.chartOptions,
);
},
updateChart() {
if (!this.chartInstance || !this.isViewMode) return;
window.TimeSeriesChart.update(
this.chartInstance,
this.measurements,
this.chartOptions,
);
},
destroyChart() {
if (this.chartInstance) {
window.TimeSeriesChart.destroy(this.chartInstance);
this.chartInstance = null;
}
},
// Alpine lifecycle hooks
init() {
if (this.isViewMode && window.Chart) {
// Use $nextTick to ensure DOM is ready
this.$nextTick(() => {
this.initChart();
});
// Watch measurements array for changes and update chart
this.$watch("data." + this.fieldName, () => {
if (this.chartInstance) {
this.updateChart();
}
});
}
},
}),
);
});
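Every widget above falls back to the same label derivation when neither `ui:label` nor a schema `title` is present. The transformation in isolation:

```javascript
// Fallback label formatting used by the widgets: replace underscores with
// spaces, then uppercase the first letter of each word.
function formatLabel(fieldName) {
  return fieldName.replace(/_/g, " ").replace(/\b\w/g, (c) => c.toUpperCase());
}

console.log(formatLabel("half_life"));    // "Half Life"
console.log(formatLabel("x_axis_label")); // "X Axis Label"
```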


@@ -21,6 +21,7 @@ document.addEventListener('alpine:init', () => {
     Alpine.data('pathwayViewer', (config) => ({
         status: config.status,
         modified: config.modified,
+        modifiedDate: null,
         statusUrl: config.statusUrl,
         emptyDueToThreshold: config.emptyDueToThreshold === "True",
         showUpdateNotice: false,
@ -39,6 +40,8 @@ document.addEventListener('alpine:init', () => {
}, },
init() { init() {
this.modifiedDate = this.parseDate(this.modified);
if (this.status === 'running') { if (this.status === 'running') {
this.startPolling(); this.startPolling();
} }
@ -66,26 +69,39 @@ document.addEventListener('alpine:init', () => {
this.showEmptyDueToThresholdNotice = true; this.showEmptyDueToThresholdNotice = true;
} }
if (data.modified > this.modified) { const nextModifiedDate = this.parseDate(data.modified);
if (!this.emptyDueToThreshold) { const modifiedChanged = this.hasNewerTimestamp(nextModifiedDate, this.modifiedDate);
this.showUpdateNotice = true; const statusChanged = data.status !== this.status;
this.updateMessage = this.getUpdateMessage(data.status);
} if ((modifiedChanged || statusChanged) && !this.emptyDueToThreshold) {
this.showUpdateNotice = true;
this.updateMessage = this.getUpdateMessage(data.status, modifiedChanged, statusChanged);
} }
if (data.status !== 'running') { this.modified = data.modified;
this.status = data.status; this.modifiedDate = nextModifiedDate;
if (this.pollInterval) { this.status = data.status;
clearInterval(this.pollInterval);
this.pollInterval = null; if (data.status !== 'running' && this.pollInterval) {
} clearInterval(this.pollInterval);
this.pollInterval = null;
} }
} catch (err) { } catch (err) {
console.error('Polling error:', err); console.error('Polling error:', err);
} }
}, },
-        getUpdateMessage(status) {
+        getUpdateMessage(status, modifiedChanged, statusChanged) {
+            // Prefer explicit status change messaging, otherwise fall back to modified change copy
+            if (statusChanged) {
+                if (status === 'completed') {
+                    return 'Prediction completed. Reload the page to see the updated Pathway.';
+                }
+                if (status === 'failed') {
+                    return 'Prediction failed. Reload the page to see the latest status.';
+                }
+            }
+
             let msg = 'Prediction ';
             if (status === 'running') {
@@ -99,6 +115,18 @@ document.addEventListener('alpine:init', () => {
             return msg;
         },

+        parseDate(dateString) {
+            // Normalize "YYYY-MM-DD HH:mm:ss" into an ISO-compatible string to avoid locale issues
+            if (!dateString) return null;
+            return new Date(dateString.replace(' ', 'T'));
+        },
+
+        hasNewerTimestamp(nextDate, currentDate) {
+            if (!nextDate) return false;
+            if (!currentDate) return true;
+            return nextDate.getTime() > currentDate.getTime();
+        },
+
         reloadPage() {
             location.reload();
         }


@@ -0,0 +1,419 @@
/**
* Unified API client for Additional Information endpoints
* Provides consistent error handling, logging, and CRUD operations
*/
window.AdditionalInformationApi = {
// Configuration
_debug: false,
/**
* Enable or disable debug logging
* @param {boolean} enabled - Whether to enable debug mode
*/
setDebug(enabled) {
this._debug = enabled;
},
/**
* Internal logging helper
* @private
*/
_log(action, data) {
if (this._debug) {
console.log(`[AdditionalInformationApi] ${action}:`, data);
}
},
//FIXME: this has the side effect of users not being able to explicitly set an empty string for a field.
/**
* Remove empty strings from payload recursively
* @param {any} value
* @returns {any}
*/
sanitizePayload(value) {
if (Array.isArray(value)) {
return value
.map((item) => this.sanitizePayload(item))
.filter((item) => item !== "");
}
if (value && typeof value === "object") {
const cleaned = {};
for (const [key, item] of Object.entries(value)) {
if (item === "") continue;
cleaned[key] = this.sanitizePayload(item);
}
return cleaned;
}
return value;
},
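The FIXME above flags a real trade-off: because `sanitizePayload` strips empty strings recursively, a user can never persist an explicit empty string for a field. A standalone copy of the method demonstrates the behavior:

```javascript
// Standalone copy of sanitizePayload: empty strings are dropped from object
// values and filtered out of arrays, recursively.
function sanitizePayload(value) {
  if (Array.isArray(value)) {
    return value.map((item) => sanitizePayload(item)).filter((item) => item !== "");
  }
  if (value && typeof value === "object") {
    const cleaned = {};
    for (const [key, item] of Object.entries(value)) {
      if (item === "") continue; // an explicit "" is silently discarded here
      cleaned[key] = sanitizePayload(item);
    }
    return cleaned;
  }
  return value;
}

const payload = { note: "", interval: { start: 5, end: "" }, tags: ["soil", ""] };
console.log(JSON.stringify(sanitizePayload(payload)));
// {"interval":{"start":5},"tags":["soil"]}
```

Null, `0`, and `false` survive the pass unchanged; only `""` is removed, which is why clearing a text field sends "field absent" rather than "field set to empty".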
/**
* Get CSRF token from meta tag
* @returns {string} CSRF token
*/
getCsrfToken() {
return document.querySelector("[name=csrf-token]")?.content || "";
},
/**
* Build headers for API requests
* @private
*/
_buildHeaders(includeContentType = true) {
const headers = {
"X-CSRFToken": this.getCsrfToken(),
};
if (includeContentType) {
headers["Content-Type"] = "application/json";
}
return headers;
},
/**
* Handle API response with consistent error handling
* @private
*/
async _handleResponse(response, action) {
if (!response.ok) {
let errorData;
try {
errorData = await response.json();
} catch {
errorData = { error: response.statusText };
}
// Try to parse the error if it's a JSON string
let parsedError = errorData;
const errorStr = errorData.detail || errorData.error;
if (typeof errorStr === "string" && errorStr.startsWith("{")) {
try {
parsedError = JSON.parse(errorStr);
} catch {
// Not JSON, use as-is
}
}
// If it's a structured validation error, throw with field errors
if (parsedError.type === "validation_error" && parsedError.field_errors) {
this._log(`${action} VALIDATION ERROR`, parsedError);
const error = new Error(parsedError.message || "Validation failed");
error.fieldErrors = parsedError.field_errors;
error.isValidationError = true;
throw error;
}
// General error
const errorMsg =
parsedError.message ||
parsedError.error ||
parsedError.detail ||
`${action} failed: ${response.statusText}`;
this._log(`${action} ERROR`, {
status: response.status,
error: errorMsg,
});
throw new Error(errorMsg);
}
const data = await response.json();
this._log(`${action} SUCCESS`, data);
return data;
},
/**
* Load all available schemas
* @returns {Promise<Object>} Object with schema definitions
*/
async loadSchemas() {
this._log("loadSchemas", "Starting...");
const response = await fetch("/api/v1/information/schema/");
return this._handleResponse(response, "loadSchemas");
},
/**
* Load additional information items for a scenario
* @param {string} scenarioUuid - UUID of the scenario
* @returns {Promise<Array>} Array of additional information items
*/
async loadItems(scenarioUuid) {
this._log("loadItems", { scenarioUuid });
const response = await fetch(
`/api/v1/scenario/${scenarioUuid}/information/`,
);
return this._handleResponse(response, "loadItems");
},
/**
* Load both schemas and items in parallel
* @param {string} scenarioUuid - UUID of the scenario
* @returns {Promise<{schemas: Object, items: Array}>}
*/
async loadSchemasAndItems(scenarioUuid) {
this._log("loadSchemasAndItems", { scenarioUuid });
const [schemas, items] = await Promise.all([
this.loadSchemas(),
this.loadItems(scenarioUuid),
]);
return { schemas, items };
},
/**
* Create new additional information for a scenario
* @param {string} scenarioUuid - UUID of the scenario
* @param {string} modelName - Name/type of the additional information model
* @param {Object} data - Data for the new item
* @returns {Promise<{status: string, uuid: string}>}
*/
async createItem(scenarioUuid, modelName, data) {
const sanitizedData = this.sanitizePayload(data);
this._log("createItem", { scenarioUuid, modelName, data: sanitizedData });
// Normalize model name to lowercase
const normalizedName = modelName.toLowerCase();
const response = await fetch(
`/api/v1/scenario/${scenarioUuid}/information/${normalizedName}/`,
{
method: "POST",
headers: this._buildHeaders(),
body: JSON.stringify(sanitizedData),
},
);
return this._handleResponse(response, "createItem");
},
/**
* Delete additional information from a scenario
* @param {string} scenarioUuid - UUID of the scenario
* @param {string} itemUuid - UUID of the item to delete
* @returns {Promise<{status: string}>}
*/
async deleteItem(scenarioUuid, itemUuid) {
this._log("deleteItem", { scenarioUuid, itemUuid });
const response = await fetch(
`/api/v1/scenario/${scenarioUuid}/information/item/${itemUuid}/`,
{
method: "DELETE",
headers: this._buildHeaders(false),
},
);
return this._handleResponse(response, "deleteItem");
},
/**
* Update existing additional information
* Tries PATCH first, falls back to delete+recreate if not supported
* @param {string} scenarioUuid - UUID of the scenario
* @param {Object} item - Item object with uuid, type, and data properties
* @returns {Promise<{status: string, uuid: string}>}
*/
async updateItem(scenarioUuid, item) {
const sanitizedData = this.sanitizePayload(item.data);
this._log("updateItem", {
scenarioUuid,
item: { ...item, data: sanitizedData },
});
const { uuid, type } = item;
// Try PATCH first (preferred method - preserves UUID)
const response = await fetch(
`/api/v1/scenario/${scenarioUuid}/information/item/${uuid}/`,
{
method: "PATCH",
headers: this._buildHeaders(),
body: JSON.stringify(sanitizedData),
},
);
if (response.status === 405) {
// PATCH not supported, fall back to delete+recreate
this._log(
"updateItem",
"PATCH not supported, falling back to delete+recreate",
);
await this.deleteItem(scenarioUuid, uuid);
return await this.createItem(scenarioUuid, type, sanitizedData);
}
return this._handleResponse(response, "updateItem");
},
/**
* Update multiple items sequentially to avoid race conditions
* @param {string} scenarioUuid - UUID of the scenario
* @param {Array<Object>} items - Array of items to update
* @returns {Promise<Array>} Array of results with success status
*/
async updateItems(scenarioUuid, items) {
this._log("updateItems", { scenarioUuid, itemCount: items.length });
const results = [];
for (const item of items) {
try {
const result = await this.updateItem(scenarioUuid, item);
results.push({
success: true,
oldUuid: item.uuid,
newUuid: result.uuid,
});
} catch (error) {
results.push({
success: false,
oldUuid: item.uuid,
error: error.message,
fieldErrors: error.fieldErrors,
isValidationError: error.isValidationError,
});
}
}
const failed = results.filter((r) => !r.success);
if (failed.length > 0) {
// If all failures are validation errors, return all validation errors for display
const validationErrors = failed.filter((f) => f.isValidationError);
if (validationErrors.length === failed.length) {
// All failures are validation errors - return all field errors by item UUID
const allFieldErrors = {};
validationErrors.forEach((ve) => {
allFieldErrors[ve.oldUuid] = ve.fieldErrors || {};
});
const error = new Error(
`${failed.length} item(s) have validation errors. Please correct them.`,
);
error.fieldErrors = allFieldErrors; // Map of UUID -> field errors
error.isValidationError = true;
error.isMultipleErrors = true; // Flag indicating multiple items have errors
throw error;
}
// Multiple failures or mixed errors - show count
throw new Error(
`Failed to update ${failed.length} item(s). Please check the form for errors.`,
);
}
return results;
},
/**
* Create a new scenario with optional additional information
* @param {string} packageUuid - UUID of the package
* @param {Object} payload - Scenario data matching ScenarioCreateSchema
* @param {string} payload.name - Scenario name (required)
* @param {string} payload.description - Scenario description (optional, default: "")
* @param {string} payload.scenario_date - Scenario date (optional, default: "No date")
* @param {string} payload.scenario_type - Scenario type (optional, default: "Not specified")
* @param {Array} payload.additional_information - Array of additional information (optional, default: [])
* @returns {Promise<{uuid, url, name, description, review_status, package}>}
*/
async createScenario(packageUuid, payload) {
this._log("createScenario", { packageUuid, payload });
const response = await fetch(`/api/v1/package/${packageUuid}/scenario/`, {
method: "POST",
headers: this._buildHeaders(),
body: JSON.stringify(payload),
});
return this._handleResponse(response, "createScenario");
},
/**
* Load all available group names
* @returns {Promise<{groups: string[]}>}
*/
async loadGroups() {
this._log("loadGroups", "Starting...");
const response = await fetch("/api/v1/information/groups/");
return this._handleResponse(response, "loadGroups");
},
/**
* Load model definitions for a specific group
* @param {string} groupName - One of 'soil', 'sludge', 'sediment'
* @returns {Promise<Object>} Object with subcategories as keys and arrays of model info
*/
async loadGroupModels(groupName) {
this._log("loadGroupModels", { groupName });
const response = await fetch(`/api/v1/information/groups/${groupName}/`);
return this._handleResponse(response, `loadGroupModels-${groupName}`);
},
/**
* Load model information for multiple groups in parallel
* @param {Array<string>} groupNames - Defaults to ['soil', 'sludge', 'sediment']
* @returns {Promise<Object>} Object with group names as keys
*/
async loadGroupsWithModels(groupNames = ["soil", "sludge", "sediment"]) {
this._log("loadGroupsWithModels", { groupNames });
const results = {};
const promises = groupNames.map(async (groupName) => {
try {
results[groupName] = await this.loadGroupModels(groupName);
} catch (err) {
this._log(`loadGroupsWithModels-${groupName} ERROR`, err);
results[groupName] = {};
}
});
await Promise.all(promises);
return results;
},
/**
* Helper to organize schemas by group based on group model information
* @param {Object} schemas - Full schema map from loadSchemas()
* @param {Object} groupModelsData - Group models data from loadGroupsWithModels()
* @returns {Object} Object with group names as keys and filtered schemas as values
*/
organizeSchemasByGroup(schemas, groupModelsData) {
this._log("organizeSchemasByGroup", {
schemaCount: Object.keys(schemas).length,
groupCount: Object.keys(groupModelsData).length,
});
const organized = {};
for (const groupName in groupModelsData) {
organized[groupName] = {};
const groupData = groupModelsData[groupName];
// Iterate through subcategories in the group
for (const subcategory in groupData) {
for (const model of groupData[subcategory]) {
// Look up schema by lowercase model name
if (schemas[model.name]) {
organized[groupName][model.name] = schemas[model.name];
}
}
}
}
return organized;
},
/**
* Convenience method that loads schemas and organizes them by group in one call
* @param {Array<string>} groupNames - Defaults to ['soil', 'sludge', 'sediment']
* @returns {Promise<{schemas, groupSchemas, groupModels}>}
*/
async loadSchemasWithGroups(groupNames = ["soil", "sludge", "sediment"]) {
this._log("loadSchemasWithGroups", { groupNames });
// Load schemas and all groups in parallel
const [schemas, groupModels] = await Promise.all([
this.loadSchemas(),
this.loadGroupsWithModels(groupNames),
]);
// Organize schemas by group
const groupSchemas = this.organizeSchemasByGroup(schemas, groupModels);
return { schemas, groupSchemas, groupModels };
},
};
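The double-parse in `_handleResponse` above is easy to miss: some backends serialize a structured error object into the `detail` or `error` string, so the client re-parses it to let field-level validation errors survive. A minimal standalone sketch of just that unwrapping step (the helper name `parseApiError` is hypothetical and not part of the API object above):

```javascript
// Hypothetical standalone version of the error-unwrapping step in
// _handleResponse: if "detail" or "error" holds a JSON-encoded object,
// try a second JSON.parse; otherwise pass the payload through unchanged.
function parseApiError(errorData) {
  let parsedError = errorData;
  const errorStr = errorData.detail || errorData.error;
  if (typeof errorStr === "string" && errorStr.startsWith("{")) {
    try {
      parsedError = JSON.parse(errorStr);
    } catch {
      // Not JSON after all - keep the original object
    }
  }
  return parsedError;
}

// A double-encoded validation error gets unwrapped...
const nested = parseApiError({
  detail: '{"type":"validation_error","field_errors":{"name":["required"]}}',
});
// ...while a plain error object passes through untouched.
const plain = parseApiError({ error: "Not Found" });
```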

View File

@@ -361,6 +361,83 @@ function draw(pathway, elem) {
function node_popup(n) {
popupContent = "";
if (timeseriesViewEnabled && n.timeseries && n.timeseries.measurements) {
for (var s of n.scenarios) {
popupContent += "<a href='" + s.url + "'>" + s.name + "</a><br>";
}
popupContent += '<div style="width:100%;height:120px"><canvas id="ts-popover-canvas"></canvas></div>';
const tsMeasurements = n.timeseries.measurements;
setTimeout(() => {
const canvas = document.getElementById('ts-popover-canvas');
if (canvas && window.Chart) {
const valid = tsMeasurements
.filter(m => m.timestamp != null && m.value != null)
.map(m => ({ ...m, timestamp: typeof m.timestamp === 'number' ? m.timestamp : new Date(m.timestamp).getTime() }))
.sort((a, b) => a.timestamp - b.timestamp);
const datasets = [];
// Error band (lower + upper with fill between)
const withErrors = valid.filter(m => m.error != null && m.error > 0);
if (withErrors.length > 0) {
datasets.push({
data: withErrors.map(m => ({ x: m.timestamp, y: m.value - m.error })),
borderColor: 'rgba(59,130,246,0.3)',
backgroundColor: 'rgba(59,130,246,0.15)',
pointRadius: 0,
fill: false,
tension: 0.1,
});
datasets.push({
data: withErrors.map(m => ({ x: m.timestamp, y: m.value + m.error })),
borderColor: 'rgba(59,130,246,0.3)',
backgroundColor: 'rgba(59,130,246,0.15)',
pointRadius: 0,
fill: '-1',
tension: 0.1,
});
}
// Main value line
datasets.push({
data: valid.map(m => ({ x: m.timestamp, y: m.value })),
borderColor: 'rgb(59,130,246)',
pointRadius: 0,
tension: 0.1,
fill: false,
});
new Chart(canvas.getContext('2d'), {
type: 'line',
data: { datasets },
options: {
responsive: true,
maintainAspectRatio: false,
plugins: {
legend: { display: false },
tooltip: { enabled: false },
},
scales: {
x: {
type: 'linear',
ticks: { font: { size: 10 } },
title: { display: false },
},
y: {
ticks: { font: { size: 10 } },
title: { display: false },
},
},
},
});
}
}, 0);
return popupContent;
}
if (n.stereo_removed) {
popupContent += "<span class='alert alert-warning alert-soft'>Removed stereochemistry for prediction</span>";
}
@@ -376,6 +453,29 @@
}
}
if (predictedPropertyViewEnabled) {
var tempContent = "";
if (Object.keys(n.predicted_properties).length > 0) {
if ("PepperPrediction" in n.predicted_properties) {
// TODO needs to be generic once we store it as AddInf
for (var s of n.predicted_properties["PepperPrediction"]) {
if (s["mean"] != null) {
tempContent += "<b>DT50 predicted via Pepper:</b> " + s["mean"].toFixed(2) + "<br>"
}
}
}
}
if (tempContent === "") {
tempContent = "<b>No predicted properties for this Node</b><br>";
}
popupContent += tempContent
}
popupContent += "<img src='" + n.image + "'><br>"
if (n.scenarios.length > 0) {
popupContent += '<b>Half-lives and related scenarios:</b><br>'
@@ -393,7 +493,13 @@ function draw(pathway, elem) {
}
function edge_popup(e) {
popupContent = "<a href='" + e.url + "'>" + e.name + "</a><br><br>";
if (e.reaction.rules) {
for (var rule of e.reaction.rules) {
popupContent += "Rule <a href='" + rule.url + "'>" + rule.name + "</a><br>";
}
}
if (e.app_domain) {
adcontent = "<p>";
@@ -498,7 +604,8 @@ function draw(pathway, elem) {
.enter().append("line")
// Check if target is pseudo and draw marker only if not pseudo
.attr("class", d => d.target.pseudo ? "link_no_arrow" : "link")
.attr("marker-end", d => d.target.pseudo ? '' : d.multi_step ? 'url(#doublearrow)' : 'url(#arrow)')
// add element to links array
link.each(function (d) {

View File

@@ -0,0 +1,351 @@
/**
* TimeSeriesChart Utility
*
* Provides chart rendering capabilities for time series data with error bounds.
* Uses Chart.js to create interactive and static visualizations.
*
* Usage:
* const chart = window.TimeSeriesChart.create(canvas, {
* measurements: [...],
* xAxisLabel: "Time",
* yAxisLabel: "Concentration",
* xAxisUnit: "days",
* yAxisUnit: "mg/L"
* });
*
* window.TimeSeriesChart.update(chart, newMeasurements, options);
* window.TimeSeriesChart.destroy(chart);
*/
window.TimeSeriesChart = {
// === PUBLIC API ===
/**
* Create an interactive time series chart
*
* @param {HTMLCanvasElement} canvas - Canvas element to render chart on
* @param {Object} options - Chart configuration options
* @param {Array} options.measurements - Array of measurement objects with timestamp, value, error, note
* @param {string} options.xAxisLabel - Label for x-axis (default: "Time")
* @param {string} options.yAxisLabel - Label for y-axis (default: "Value")
* @param {string} options.xAxisUnit - Unit for x-axis (default: "")
* @param {string} options.yAxisUnit - Unit for y-axis (default: "")
* @returns {Chart|null} Chart.js instance or null if creation failed
*/
create(canvas, options = {}) {
if (!this._validateCanvas(canvas)) return null;
if (!window.Chart) {
console.warn("Chart.js is not loaded");
return null;
}
const ctx = canvas.getContext("2d");
if (!ctx) return null;
const chartData = this._transformData(options.measurements || [], options);
if (chartData.datasets.length === 0) {
return null; // No data to display
}
const config = this._buildConfig(chartData, options);
return new Chart(ctx, config);
},
/**
* Update an existing chart with new data
*
* @param {Chart} chartInstance - Chart.js instance to update
* @param {Array} measurements - New measurements array
* @param {Object} options - Chart configuration options
*/
update(chartInstance, measurements, options = {}) {
if (!chartInstance) return;
const chartData = this._transformData(measurements || [], options);
chartInstance.data.datasets = chartData.datasets;
chartInstance.options.scales.x.title.text = chartData.xAxisLabel;
chartInstance.options.scales.y.title.text = chartData.yAxisLabel;
chartInstance.update("none");
},
/**
* Destroy chart instance and cleanup
*
* @param {Chart} chartInstance - Chart.js instance to destroy
*/
destroy(chartInstance) {
if (chartInstance && typeof chartInstance.destroy === "function") {
chartInstance.destroy();
}
},
// === PRIVATE HELPERS ===
/**
* Transform measurements into Chart.js datasets
* @private
*/
_transformData(measurements, options) {
const preparedData = this._prepareData(measurements);
if (preparedData.length === 0) {
return { datasets: [], xAxisLabel: "Time", yAxisLabel: "Value" };
}
const xAxisLabel = options.xAxisLabel || "Time";
const yAxisLabel = options.yAxisLabel || "Value";
const xAxisUnit = options.xAxisUnit || "";
const yAxisUnit = options.yAxisUnit || "";
const datasets = [];
// Error bounds datasets FIRST (if errors exist) - renders as background
const errorDatasets = this._buildErrorDatasets(preparedData);
if (errorDatasets.length > 0) {
datasets.push(...errorDatasets);
}
// Main line dataset LAST - renders on top
datasets.push(this._buildMainDataset(preparedData, yAxisLabel));
return {
datasets: datasets,
xAxisLabel: this._formatAxisLabel(xAxisLabel, xAxisUnit),
yAxisLabel: this._formatAxisLabel(yAxisLabel, yAxisUnit),
};
},
/**
* Prepare and validate measurements data
* @private
*/
_prepareData(measurements) {
return measurements
.filter(
(m) => m.timestamp != null && m.value != null,
)
.map((m) => {
// Normalize timestamp - handle both numeric and date strings
let timestamp;
if (typeof m.timestamp === "number") {
timestamp = m.timestamp;
} else {
timestamp = new Date(m.timestamp).getTime();
}
return {
...m,
timestamp: timestamp,
};
})
.sort((a, b) => a.timestamp - b.timestamp);
},
/**
* Build main line dataset
* @private
*/
_buildMainDataset(validMeasurements, yAxisLabel) {
return {
label: yAxisLabel,
data: validMeasurements.map((m) => ({
x: m.timestamp,
y: m.value,
error: m.error || null,
})),
borderColor: "rgb(59, 130, 246)",
backgroundColor: "rgba(59, 130, 246, 0.1)",
tension: 0.1,
pointRadius: 0, // Hide individual points
pointHoverRadius: 6,
fill: false,
};
},
/**
* Build error bound datasets (upper and lower)
* @private
*/
_buildErrorDatasets(validMeasurements) {
const hasErrors = validMeasurements.some(
(m) => m.error !== null && m.error !== undefined && m.error > 0,
);
if (!hasErrors) return [];
const measurementsWithErrors = validMeasurements.filter(
(m) => m.error !== null && m.error !== undefined && m.error > 0,
);
return [
// Lower error bound - FIRST (bottom layer)
{
label: "Error (lower)",
data: measurementsWithErrors.map((m) => ({
x: m.timestamp,
y: m.value - m.error,
})),
borderColor: "rgba(59, 130, 246, 0.3)",
backgroundColor: "rgba(59, 130, 246, 0.15)",
pointRadius: 0,
fill: false,
tension: 0.1,
},
// Upper error bound - SECOND (fill back to lower)
{
label: "Error (upper)",
data: measurementsWithErrors.map((m) => ({
x: m.timestamp,
y: m.value + m.error,
})),
borderColor: "rgba(59, 130, 246, 0.3)",
backgroundColor: "rgba(59, 130, 246, 0.15)",
pointRadius: 0,
fill: "-1", // Fill back to previous dataset (lower bound)
tension: 0.1,
},
];
},
/**
* Build complete Chart.js configuration
* @private
*/
_buildConfig(chartData, options) {
return {
type: "line",
data: { datasets: chartData.datasets },
options: {
responsive: true,
maintainAspectRatio: false,
interaction: {
intersect: false,
mode: "index",
},
plugins: {
legend: {
display: false,
},
tooltip: this._buildTooltipConfig(
options.xAxisUnit || "",
options.yAxisUnit || "",
),
},
scales: this._buildScalesConfig(
chartData.xAxisLabel,
chartData.yAxisLabel,
),
},
};
},
/**
* Build tooltip configuration with custom callbacks
* @private
*/
_buildTooltipConfig(xAxisUnit, yAxisUnit) {
return {
enabled: true,
callbacks: {
title: (contexts) => {
// Show timestamp
const context = contexts[0];
if (!context) return "Measurement";
const timestamp = context.parsed.x;
return xAxisUnit
? `Time: ${timestamp} ${xAxisUnit}`
: `Time: ${timestamp}`;
},
label: (context) => {
// Show value with unit
try {
const value = context.parsed.y;
if (value === null || value === undefined) {
return `${context.dataset.label || "Value"}: N/A`;
}
const valueStr = yAxisUnit
? `${value} ${yAxisUnit}`
: String(value);
return `${context.dataset.label || "Value"}: ${valueStr}`;
} catch (e) {
console.error("Tooltip label error:", e);
return `${context.dataset.label || "Value"}: ${context.parsed.y ?? "N/A"}`;
}
},
afterLabel: (context) => {
// Show error information
try {
const point = context.raw;
// Main line is now the last dataset (after error bounds if they exist)
const isMainDataset = context.dataset.label &&
!context.dataset.label.startsWith("Error");
if (!point || !isMainDataset) return null;
const lines = [];
// Show error if available
if (
point.error !== null &&
point.error !== undefined &&
point.error > 0
) {
const errorStr = yAxisUnit
? `±${point.error.toFixed(4)} ${yAxisUnit}`
: `±${point.error.toFixed(4)}`;
lines.push(`Error: ${errorStr}`);
}
return lines.length > 0 ? lines : null;
} catch (e) {
console.error("Tooltip afterLabel error:", e);
return null;
}
},
},
};
},
/**
* Build scales configuration
* @private
*/
_buildScalesConfig(xAxisLabel, yAxisLabel) {
return {
x: {
type: "linear",
title: {
display: true,
text: xAxisLabel || "Time",
},
},
y: {
title: {
display: true,
text: yAxisLabel || "Value",
},
},
};
},
/**
* Format axis label with unit
* @private
*/
_formatAxisLabel(label, unit) {
return unit ? `${label} (${unit})` : label;
},
/**
* Validate canvas element
* @private
*/
_validateCanvas(canvas) {
if (!canvas || !(canvas instanceof HTMLCanvasElement)) {
console.warn("Invalid canvas element provided to TimeSeriesChart");
return false;
}
return true;
},
};
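`_prepareData` above is the one piece of the utility that is independent of Chart.js: it drops incomplete points, coerces date-string timestamps to epoch milliseconds, and sorts ascending by time. An equivalent standalone sketch of that normalization (the function name `prepareMeasurements` is hypothetical, mirroring the private method above):

```javascript
// Standalone sketch of the _prepareData normalization: filter out
// measurements missing a timestamp or value, coerce string timestamps
// to epoch milliseconds, and sort ascending by time.
function prepareMeasurements(measurements) {
  return measurements
    .filter((m) => m.timestamp != null && m.value != null)
    .map((m) => ({
      ...m,
      timestamp:
        typeof m.timestamp === "number"
          ? m.timestamp
          : new Date(m.timestamp).getTime(),
    }))
    .sort((a, b) => a.timestamp - b.timestamp);
}

const prepared = prepareMeasurements([
  { timestamp: "1970-01-01T00:00:10Z", value: 2.0 },
  { timestamp: 5000, value: 3.5, error: 0.2 },
  { timestamp: null, value: 1.0 }, // dropped: no timestamp
]);
// prepared holds two points, sorted: 5000 ms first, then 10000 ms
```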

View File

@@ -1,4 +1,12 @@
{% if meta.can_edit %}
<li>
<a
class="button"
onclick="document.getElementById('edit_scenario_modal').showModal(); return false;"
>
<i class="glyphicon glyphicon-trash"></i> Edit Scenario</a
>
</li>
<li>
<a
class="button"

View File

@@ -9,6 +9,39 @@
<input type="hidden" name="job-name" value="batch-predict" />
<fieldset class="flex flex-col gap-4 md:flex-3/4">
<!-- CSV Upload Section -->
<div class="mb-6 rounded-lg border-2 border-dashed border-base-300 p-6">
<div class="flex flex-col gap-4">
<div
class="flex flex-col gap-3 sm:flex-row sm:items-center sm:justify-between"
>
<div class="flex-1">
<h3 class="text-base font-medium text-base-content mb-1">
Load from CSV
</h3>
<p class="text-sm text-base-content/70">
Upload a CSV file with SMILES and name columns, or insert
manually in the table below
</p>
</div>
<div class="flex-shrink-0">
<input
type="file"
id="csv-file"
accept=".csv,.txt"
class="file-input file-input-bordered file-input-sm w-full sm:w-auto"
/>
</div>
</div>
<div
class="text-xs text-base-content/50 border-t border-base-300 pt-3"
>
<strong>Format:</strong> First column = SMILES, Second column =
Name (headers optional) • Maximum 30 rows
</div>
</div>
</div>
<table class="table table-zebra w-full">
<thead>
<tr>
@@ -113,10 +146,16 @@
<script>
const tableBody = document.getElementById("smiles-table-body");
const addRowBtn = document.getElementById("add-row-btn");
const csvFileInput = document.getElementById("csv-file");
const form = document.getElementById("smiles-form");
const hiddenField = document.getElementById("substrates");
// Function to create a new table row
function createTableRow(
smilesValue = "",
nameValue = "",
placeholder = true,
) {
const row = document.createElement("tr");
const tdSmiles = document.createElement("td");
@@ -125,19 +164,147 @@
const smilesInput = document.createElement("input");
smilesInput.type = "text";
smilesInput.className = "input input-bordered w-full smiles-input";
smilesInput.placeholder = placeholder ? "SMILES" : "";
smilesInput.value = smilesValue;
const nameInput = document.createElement("input");
nameInput.type = "text";
nameInput.className = "input input-bordered w-full name-input";
nameInput.placeholder = placeholder ? "Name" : "";
nameInput.value = nameValue;
const smilesLabel = document.createElement("label");
smilesLabel.appendChild(smilesInput);
tdSmiles.appendChild(smilesLabel);
const nameLabel = document.createElement("label");
nameLabel.appendChild(nameInput);
tdName.appendChild(nameLabel);
row.appendChild(tdSmiles);
row.appendChild(tdName);
return row;
}
// Function to clear the table
function clearTable() {
tableBody.innerHTML = "";
}
// Function to populate table from CSV data
function populateTableFromCSV(csvData) {
const lines = csvData.trim().split("\n");
const maxRows = 30;
// Clear existing table
clearTable();
// Skip header row if it looks like headers
const startIndex =
lines.length > 0 &&
(lines[0].toLowerCase().includes("smiles") ||
lines[0].toLowerCase().includes("name"))
? 1
: 0;
let rowCount = 0;
for (let i = startIndex; i < lines.length && rowCount < maxRows; i++) {
const line = lines[i].trim();
if (!line) continue;
// Parse CSV line - split by comma, first part is SMILES, rest is name
const firstCommaIndex = line.indexOf(",");
let smiles = "";
let name = "";
if (firstCommaIndex === -1) {
// No comma found, treat entire line as SMILES
smiles = line.trim().replace(/^"(.*)"$/, "$1");
} else {
// Split at first comma only
smiles = line
.substring(0, firstCommaIndex)
.trim()
.replace(/^"(.*)"$/, "$1");
name = line
.substring(firstCommaIndex + 1)
.trim()
.replace(/^"(.*)"$/, "$1");
}
// Skip empty rows
if (!smiles && !name) continue;
const row = createTableRow(smiles, name, false);
tableBody.appendChild(row);
rowCount++;
}
// Add at least one empty row if no data was loaded
if (rowCount === 0) {
const row = createTableRow();
tableBody.appendChild(row);
}
// Show success message
if (rowCount > 0) {
const message =
rowCount >= maxRows
? `Loaded ${rowCount} rows (maximum reached)`
: `Loaded ${rowCount} rows from CSV`;
// Create temporary success notification
const notification = document.createElement("div");
notification.className = "alert alert-success mb-4";
notification.innerHTML = `
<svg xmlns="http://www.w3.org/2000/svg" class="stroke-current shrink-0 h-6 w-6" fill="none" viewBox="0 0 24 24">
<path stroke-linecap="round" stroke-linejoin="round" stroke-width="2" d="M9 12l2 2 4-4m6 2a9 9 0 11-18 0 9 9 0 0118 0z" />
</svg>
<span>${message}</span>
`;
// Insert notification before the table
const tableContainer = document.querySelector("table").parentNode;
tableContainer.insertBefore(
notification,
document.querySelector("table"),
);
// Remove notification after 3 seconds
setTimeout(() => {
if (notification.parentNode) {
notification.parentNode.removeChild(notification);
}
}, 3000);
}
}
// Handle CSV file selection
csvFileInput.addEventListener("change", function (event) {
const file = event.target.files[0];
if (!file) return;
// Check file type
if (!file.name.match(/\.(csv|txt)$/i)) {
alert("Please select a CSV or TXT file.");
return;
}
const reader = new FileReader();
reader.onload = function (e) {
try {
populateTableFromCSV(e.target.result);
} catch (error) {
console.error("Error parsing CSV:", error);
alert("Error parsing CSV file. Please check the file format.");
}
};
reader.readAsText(file);
});
// Handle add row button
addRowBtn.addEventListener("click", () => {
const row = createTableRow();
tableBody.appendChild(row);
});
@@ -154,7 +321,7 @@
const smiles = smilesInputs[i].value.trim();
const name = nameInputs[i]?.value.trim() ?? "";
// Skip empty rows
if (!smiles && !name) {
continue;
}
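The CSV handling in this template deliberately splits each line only at the first comma, so the SMILES column (which never contains commas) comes first and any remaining commas stay in the name; a single pair of surrounding double quotes is stripped from each field. A standalone sketch of that per-line rule (the helper name `parseCsvLine` is hypothetical):

```javascript
// Hypothetical standalone version of the per-line CSV rule used above:
// everything before the first comma is the SMILES, the remainder is the
// name, and one pair of surrounding double quotes is stripped from each.
function parseCsvLine(line) {
  const unquote = (s) => s.trim().replace(/^"(.*)"$/, "$1");
  const firstCommaIndex = line.indexOf(",");
  if (firstCommaIndex === -1) {
    // No comma: treat the whole line as a SMILES with no name
    return { smiles: unquote(line), name: "" };
  }
  return {
    smiles: unquote(line.substring(0, firstCommaIndex)),
    name: unquote(line.substring(firstCommaIndex + 1)),
  };
}

const a = parseCsvLine('c1ccccc1,"Benzene, pure"');
const b = parseCsvLine("CCO");
```

Splitting at the first comma only (rather than `line.split(",")`) is what lets quoted names containing commas survive without a full CSV parser.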

View File

@@ -1,21 +1,34 @@
{% extends "collections/paginated_base.html" %}
{% load envipytags %}
{% block page_title %}Compounds{% endblock %}
{% block action_button %}
<div class="flex items-center gap-2">
{% if meta.can_edit %}
<button
type="button"
class="btn btn-primary btn-sm"
onclick="document.getElementById('new_compound_modal').showModal(); return false;"
>
New Compound
</button>
{% endif %}
{% epdb_slot_templates "epdb.actions.collections.compound" as action_button_templates %}
{% for tpl in action_button_templates %}
{% include tpl %}
{% endfor %}
</div>
{% endblock action_button %}
{% block action_modals %}
{% include "modals/collections/new_compound_modal.html" %}
{% epdb_slot_templates "modals.collections.compound" as action_modals_templates %}
{% for tpl in action_modals_templates %}
{% include tpl %}
{% endfor %}
{% endblock action_modals %}
{% block description %} {% block description %}

View File

@@ -98,7 +98,7 @@
class="mt-6"
x-show="activeTab === 'reviewed' && !isEmpty"
x-data="remotePaginatedList({
endpoint: '{{ api_endpoint }}?review_status=true{% if entity_type == 'scenario' %}&exclude_related=true{% endif %}',
instanceId: '{{ entity_type }}_reviewed',
isReviewed: true,
perPage: {{ per_page|default:50 }}
@@ -113,7 +113,7 @@
class="mt-6"
x-show="activeTab === 'unreviewed' && !isEmpty"
x-data="remotePaginatedList({
endpoint: '{{ api_endpoint }}?review_status=false{% if entity_type == 'scenario' %}&exclude_related=true{% endif %}',
instanceId: '{{ entity_type }}_unreviewed',
isReviewed: false,
perPage: {{ per_page|default:50 }}

View File

@@ -1,7 +1,15 @@
{% extends "collections/paginated_base.html" %}
{% load static %}
{% block page_title %}Scenarios{% endblock %}
{% block action_modals %}
{# Load required scripts before modal #}
<script src="{% static 'js/alpine/components/widgets.js' %}"></script>
<script src="{% static 'js/api/additional-information.js' %}"></script>
{% include "modals/collections/new_scenario_modal.html" %}
{% endblock action_modals %}
{% block action_button %}
{% if meta.can_edit %}
<button
@@ -14,10 +22,6 @@
{% endif %}
{% endblock action_button %}
{% block description %}
<p>
A scenario contains meta-information that can be attached to other data

View File

@@ -0,0 +1,20 @@
{% extends "collections/paginated_base.html" %}
{% block page_title %}Settings{% endblock %}
{% block action_button %}
{% endblock action_button %}
{% block action_modals %}
{% endblock action_modals %}
{% block description %}
<p>A setting includes configuration parameters for pathway predictions.</p>
<a
target="_blank"
href="https://wiki.envipath.org/index.php/Setting"
class="link link-primary"
>
Learn more &gt;&gt;
</a>
{% endblock description %}

View File

@@ -5,10 +5,8 @@
<nav>
<h6 class="footer-title">Services</h6>
<a class="link link-hover" href="/predict">Predict</a>
<a class="link link-hover" href="/batch-predict">Batch Predict</a>
<a class="link link-hover" href="/package">Packages</a>
<a
href="https://wiki.envipath.org/"
target="_blank"
@@ -19,10 +17,7 @@
{% endif %}
<nav>
<h6 class="footer-title">Company</h6>
<a class="link link-hover" href="/about" target="_blank">About us</a>
</nav>
<nav>
<h6 class="footer-title">Legal</h6>


@@ -9,6 +9,11 @@
}
.spinner-slow svg {
animation: spin-slow 3s linear infinite;
width: 100%;
height: 100%;
transform-origin: center;
transform-box: fill-box;
display: block;
}
</style>
<div class="spinner-slow flex h-full w-full items-center justify-center">


@@ -0,0 +1,5 @@
<template x-if="{{ error_var }}">
<div class="alert alert-error mb-4">
<span x-text="{{ error_var }}"></span>
</div>
</template>


@@ -0,0 +1,5 @@
<template x-if="{{ loading_var }}">
<div class="flex justify-center items-center p-4">
<span class="loading loading-spinner loading-md"></span>
</div>
</template>


@@ -0,0 +1,163 @@
{% load static %}
<div>
<!-- Loading state -->
<template x-if="loading">
<div class="flex items-center justify-center p-4">
<span class="loading loading-spinner loading-md"></span>
</div>
</template>
<!-- Error state -->
<template x-if="error">
<div class="alert alert-error mb-4">
<span x-text="error"></span>
</div>
</template>
<!-- Schema form -->
<template x-if="schema && !loading">
<div class="space-y-4">
<template x-if="attach_object">
<div>
<h4>
<span
class="text-lg font-semibold"
x-text="schema['x-title'] + ' attached to'"
></span>
<a
class="text-lg font-semibold underline text-blue-600 hover:text-blue-800"
:href="attach_object.url"
x-text="attach_object.name"
target="_blank"
></a>
</h4>
</div>
</template>
<!-- Title from schema -->
<template x-if="(schema['x-title'] || schema.title) && !attach_object">
<h4
class="text-lg font-semibold"
x-text="data.name || schema['x-title'] || schema.title"
></h4>
</template>
<!-- Render each field (ordered by ui:order) -->
<template x-for="fieldName in getFieldOrder()" :key="fieldName">
<div>
<!-- Text widget -->
<template
x-if="getWidget(fieldName, schema.properties[fieldName]) === 'text'"
>
<div
x-data="textWidget(fieldName, data, schema, uiSchema, mode, debugErrors, context)"
>
{% include "components/widgets/text_widget.html" %}
</div>
</template>
<!-- Textarea widget -->
<template
x-if="getWidget(fieldName, schema.properties[fieldName]) === 'textarea'"
>
<div
x-data="textareaWidget(fieldName, data, schema, uiSchema, mode, debugErrors, context)"
>
{% include "components/widgets/textarea_widget.html" %}
</div>
</template>
<!-- Number widget -->
<template
x-if="getWidget(fieldName, schema.properties[fieldName]) === 'number'"
>
<div
x-data="numberWidget(fieldName, data, schema, uiSchema, mode, debugErrors, context)"
>
{% include "components/widgets/number_widget.html" %}
</div>
</template>
<!-- Select widget -->
<template
x-if="getWidget(fieldName, schema.properties[fieldName]) === 'select'"
>
<div
x-data="selectWidget(fieldName, data, schema, uiSchema, mode, debugErrors, context)"
>
{% include "components/widgets/select_widget.html" %}
</div>
</template>
<!-- Checkbox widget -->
<template
x-if="getWidget(fieldName, schema.properties[fieldName]) === 'checkbox'"
>
<div
x-data="checkboxWidget(fieldName, data, schema, uiSchema, mode, debugErrors, context)"
>
{% include "components/widgets/checkbox_widget.html" %}
</div>
</template>
<!-- Interval widget -->
<template
x-if="getWidget(fieldName, schema.properties[fieldName]) === 'interval'"
>
<div
x-data="intervalWidget(fieldName, data, schema, uiSchema, mode, debugErrors, context)"
>
{% include "components/widgets/interval_widget.html" %}
</div>
</template>
<!-- PubMed link widget -->
<template
x-if="getWidget(fieldName, schema.properties[fieldName]) === 'pubmed-link'"
>
<div
x-data="pubmedWidget(fieldName, data, schema, uiSchema, mode, debugErrors, context)"
>
{% include "components/widgets/pubmed_link_widget.html" %}
</div>
</template>
<!-- Compound link widget -->
<template
x-if="getWidget(fieldName, schema.properties[fieldName]) === 'compound-link'"
>
<div
x-data="compoundWidget(fieldName, data, schema, uiSchema, mode, debugErrors, context)"
>
{% include "components/widgets/compound_link_widget.html" %}
</div>
</template>
<!-- TimeSeries table widget -->
<template
x-if="getWidget(fieldName, schema.properties[fieldName]) === 'timeseries-table'"
>
<div
x-data="timeseriesTableWidget(fieldName, data, schema, uiSchema, mode, debugErrors, context)"
>
{% include "components/widgets/timeseries_table_widget.html" %}
</div>
</template>
</div>
</template>
<!-- Submit button (only in edit mode with endpoint) -->
<template x-if="mode === 'edit' && endpoint">
<div class="form-control mt-4">
<button class="btn btn-primary" @click="submit()" :disabled="loading">
<template x-if="loading">
<span class="loading loading-spinner loading-sm"></span>
</template>
<span x-text="loading ? 'Submitting...' : 'Submit'"></span>
</button>
</div>
</template>
</div>
</template>
</div>
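The schema-form template above hands each field to an Alpine component factory (`textWidget(...)`, `numberWidget(...)`, and so on) that the diff loads from `js/alpine/components/widgets.js` but does not show. A minimal sketch of what such a factory could look like, assuming the label and help-text fallbacks implied by the templates — every name and fallback order here is a hypothetical reconstruction, not the actual implementation:

```javascript
// Hypothetical sketch of one of the widget factories the templates bind via
// x-data. Signature mirrors the template's call; internals are assumptions.
function textWidget(fieldName, data, schema, uiSchema, mode, debugErrors, context) {
  const fieldSchema = (schema.properties || {})[fieldName] || {};
  const fieldUi = (uiSchema || {})[fieldName] || {};
  return {
    fieldName,
    context,
    // Label presumably falls back from ui:title to the schema title to the raw key.
    label: fieldUi['ui:title'] || fieldSchema.title || fieldName,
    helpText: fieldUi['ui:help'] || fieldSchema.description || '',
    // x-model="value" reads and writes straight through to the shared data object.
    get value() { return data[fieldName]; },
    set value(v) { data[fieldName] = v; },
    get isViewMode() { return mode === 'view'; },
    get isEditMode() { return mode === 'edit'; },
  };
}
```

Because the getter/setter pair proxies the shared `data` object, edits made inside any one widget are immediately visible to the enclosing schema form's `submit()`.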


@@ -0,0 +1,53 @@
{# Checkbox widget - pure HTML template #}
<div class="form-control">
<div class="flex flex-col gap-2 sm:flex-row sm:items-baseline">
<!-- Label -->
<label class="label sm:w-48 sm:shrink-0">
<span
class="label-text"
:class="{
'text-error': $store.validationErrors.hasError(fieldName, context),
'text-sm text-base-content/60': isViewMode
}"
x-text="label"
></span>
</label>
<!-- Input column -->
<div class="flex-1">
<!-- Help text -->
<template x-if="helpText">
<div class="label">
<span
class="label-text-alt text-base-content/60"
x-text="helpText"
></span>
</div>
</template>
<!-- View mode -->
<template x-if="isViewMode">
<div class="mt-1">
<span class="text-base" x-text="checked ? 'Yes' : 'No'"></span>
</div>
</template>
<!-- Edit mode -->
<template x-if="isEditMode">
<input type="checkbox" class="checkbox" x-model="checked" />
</template>
<!-- Errors -->
<template x-if="$store.validationErrors.hasError(fieldName, context)">
<div class="label">
<template
x-for="errMsg in $store.validationErrors.getErrors(fieldName, context)"
:key="errMsg"
>
<span class="label-text-alt text-error" x-text="errMsg"></span>
</template>
</div>
</template>
</div>
</div>
</div>


@@ -0,0 +1,69 @@
{# Compound link widget - pure HTML template #}
<div class="form-control">
<div class="flex flex-col gap-2 sm:flex-row sm:items-baseline">
<!-- Label -->
<label class="label sm:w-48 sm:shrink-0">
<span
class="label-text"
:class="{
'text-error': $store.validationErrors.hasError(fieldName, context),
'text-sm text-base-content/60': isViewMode
}"
x-text="label"
></span>
</label>
<!-- Input column -->
<div class="flex-1">
<!-- Help text -->
<template x-if="helpText">
<div class="label">
<span
class="label-text-alt text-base-content/60"
x-text="helpText"
></span>
</div>
</template>
<!-- View mode: display as link -->
<template x-if="isViewMode">
<div class="mt-1">
<template x-if="value">
<a
:href="value"
class="link link-primary break-all"
target="_blank"
x-text="value"
></a>
</template>
<template x-if="!value">
<span class="text-base-content/50"></span>
</template>
</div>
</template>
<!-- Edit mode -->
<template x-if="isEditMode">
<input
type="url"
class="input input-bordered w-full"
:class="{ 'input-error': $store.validationErrors.hasError(fieldName, context) }"
placeholder="Compound URL"
x-model="value"
/>
</template>
<!-- Errors -->
<template x-if="$store.validationErrors.hasError(fieldName, context)">
<div class="label">
<template
x-for="errMsg in $store.validationErrors.getErrors(fieldName, context)"
:key="errMsg"
>
<span class="label-text-alt text-error" x-text="errMsg"></span>
</template>
</div>
</template>
</div>
</div>
</div>


@@ -0,0 +1,89 @@
{# Interval widget for range inputs - pure HTML template #}
<div class="form-control">
<div class="flex flex-col gap-2 sm:flex-row sm:items-baseline">
<!-- Label -->
<label class="label sm:w-48 sm:shrink-0">
<span
class="label-text"
:class="{
'text-error': hasValidationError || $store.validationErrors.hasError(fieldName, context),
'text-sm text-base-content/60': isViewMode
}"
x-text="label"
></span>
</label>
<!-- Input column -->
<div class="flex-1">
<!-- Help text -->
<template x-if="helpText">
<div class="label">
<span
class="label-text-alt text-base-content/60"
x-text="helpText"
></span>
</div>
</template>
<!-- View mode: formatted range with unit -->
<template x-if="isViewMode">
<div class="mt-1">
<span class="text-base" x-text="start"></span>
<span class="text-base-content/60 text-xs" x-show="!isSameValue"
>to</span
>
<span class="text-base" x-text="end" x-show="!isSameValue"></span>
<template x-if="start && end && unit">
<span class="text-xs" x-text="unit"></span>
</template>
</div>
</template>
<!-- Edit mode: two inputs with shared unit badge -->
<template x-if="isEditMode">
<div class="flex items-center gap-2">
<input
type="number"
class="input input-bordered flex-1"
:class="{ 'input-error': hasValidationError || $store.validationErrors.hasError(fieldName, context) }"
placeholder="Min"
x-model="start"
/>
<span class="text-base-content/60">to</span>
<input
type="number"
class="input input-bordered flex-1"
:class="{ 'input-error': hasValidationError || $store.validationErrors.hasError(fieldName, context) }"
placeholder="Max"
x-model="end"
/>
<template x-if="unit">
<span class="badge badge-ghost badge-lg" x-text="unit"></span>
</template>
</div>
</template>
<!-- Errors -->
<template
x-if="hasValidationError || $store.validationErrors.hasError(fieldName, context)"
>
<div class="label">
<!-- Client-side validation error -->
<template x-if="hasValidationError">
<span class="label-text-alt text-error">
Start value must be less than or equal to end value
</span>
</template>
<!-- Server-side validation errors from store -->
<template
x-for="errMsg in $store.validationErrors.getErrors(fieldName, context)"
:key="errMsg"
>
<span class="label-text-alt text-error" x-text="errMsg"></span>
</template>
</div>
</template>
</div>
</div>
</div>


@@ -0,0 +1,69 @@
{# Number input widget - pure HTML template #}
<div class="form-control">
<div class="flex flex-col gap-2 sm:flex-row sm:items-baseline">
<!-- Label -->
<label class="label sm:w-48 sm:shrink-0">
<span
class="label-text"
:class="{
'text-error': $store.validationErrors.hasError(fieldName, context),
'text-sm text-base-content/60': isViewMode
}"
x-text="label"
></span>
</label>
<!-- Input column -->
<div class="flex-1">
<!-- Help text -->
<template x-if="helpText">
<div class="label">
<span
class="label-text-alt text-base-content/60"
x-text="helpText"
></span>
</div>
</template>
<!-- View mode: show value with unit -->
<template x-if="isViewMode">
<div class="mt-1">
<span class="text-base" x-text="value"></span>
<template x-if="value && unit">
<span class="text-xs" x-text="unit"></span>
</template>
</div>
</template>
<!-- Edit mode: input with unit suffix -->
<template x-if="isEditMode">
<div :class="unit ? 'join w-full' : ''">
<input
type="number"
:class="unit ? 'input input-bordered join-item flex-1' : 'input input-bordered w-full'"
class:input-error="$store.validationErrors.hasError(fieldName, context)"
x-model="value"
/>
<template x-if="unit">
<span
class="btn btn-ghost join-item no-animation pointer-events-none"
x-text="unit"
></span>
</template>
</div>
</template>
<!-- Errors -->
<template x-if="$store.validationErrors.hasError(fieldName, context)">
<div class="label">
<template
x-for="errMsg in $store.validationErrors.getErrors(fieldName, context)"
:key="errMsg"
>
<span class="label-text-alt text-error" x-text="errMsg"></span>
</template>
</div>
</template>
</div>
</div>
</div>


@@ -0,0 +1,69 @@
{# PubMed link widget - pure HTML template #}
<div class="form-control">
<div class="flex flex-col gap-2 sm:flex-row sm:items-baseline">
<!-- Label -->
<label class="label sm:w-48 sm:shrink-0">
<span
class="label-text"
:class="{
'text-error': $store.validationErrors.hasError(fieldName, context),
'text-sm text-base-content/60': isViewMode
}"
x-text="label"
></span>
</label>
<!-- Input column -->
<div class="flex-1">
<!-- Help text -->
<template x-if="helpText">
<div class="label">
<span
class="label-text-alt text-base-content/60"
x-text="helpText"
></span>
</div>
</template>
<!-- View mode: display as link -->
<template x-if="isViewMode">
<div class="mt-1">
<template x-if="value && pubmedUrl">
<a
:href="pubmedUrl"
class="link link-primary"
target="_blank"
x-text="value"
></a>
</template>
<template x-if="!value">
<span class="text-base-content/50"></span>
</template>
</div>
</template>
<!-- Edit mode -->
<template x-if="isEditMode">
<input
type="text"
class="input input-bordered w-full"
:class="{ 'input-error': $store.validationErrors.hasError(fieldName, context) }"
placeholder="PubMed ID"
x-model="value"
/>
</template>
<!-- Errors -->
<template x-if="$store.validationErrors.hasError(fieldName, context)">
<div class="label">
<template
x-for="errMsg in $store.validationErrors.getErrors(fieldName, context)"
:key="errMsg"
>
<span class="label-text-alt text-error" x-text="errMsg"></span>
</template>
</div>
</template>
</div>
</div>
</div>
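The PubMed widget's view mode only links out when both `value` and a derived `pubmedUrl` are truthy. How `pubmedUrl` is computed lives in the unshown `widgets.js`; a plausible sketch, assuming plain numeric PubMed IDs and the standard pubmed.ncbi.nlm.nih.gov URL scheme (both assumptions, not confirmed by this diff):

```javascript
// Hypothetical sketch of the pubmedUrl derivation the widget template binds to.
// The ID format check and URL scheme are assumptions.
function pubmedUrl(value) {
  const id = String(value || '').trim();
  // Only plain numeric IDs produce a link; anything else leaves pubmedUrl falsy,
  // so the template's x-if="value && pubmedUrl" renders no anchor.
  return /^\d+$/.test(id) ? `https://pubmed.ncbi.nlm.nih.gov/${id}/` : null;
}
```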


@@ -0,0 +1,69 @@
{# Select dropdown widget - pure HTML template #}
<div class="form-control">
<div class="flex flex-col gap-2 sm:flex-row sm:items-baseline">
<!-- Label -->
<label class="label sm:w-48 sm:shrink-0">
<span
class="label-text"
:class="{
'text-error': $store.validationErrors.hasError(fieldName, context),
'text-sm text-base-content/60': isViewMode
}"
x-text="label"
></span>
</label>
<!-- Input column -->
<div class="flex-1">
<!-- Help text -->
<template x-if="helpText">
<div class="label">
<span
class="label-text-alt text-base-content/60"
x-text="helpText"
></span>
</div>
</template>
<!-- View mode -->
<template x-if="isViewMode">
<div class="mt-1">
<template x-if="value">
<span class="text-base" x-text="value"></span>
</template>
<template x-if="!value">
<span class="text-base-content/50"></span>
</template>
</div>
</template>
<!-- Edit mode -->
<template x-if="isEditMode">
<select
class="select select-bordered w-full"
:class="{ 'select-error': $store.validationErrors.hasError(fieldName, context) }"
x-model="value"
:multiple="multiple"
>
<option value="" :selected="!value">Select...</option>
<template x-for="opt in options" :key="opt">
<option :value="opt" x-text="opt"></option>
</template>
</select>
</template>
<!-- Errors -->
<template x-if="$store.validationErrors.hasError(fieldName, context)">
<div class="label">
<template
x-for="errMsg in $store.validationErrors.getErrors(fieldName, context)"
:key="errMsg"
>
<span class="label-text-alt text-error" x-text="errMsg"></span>
</template>
</div>
</template>
</div>
</div>
</div>


@@ -0,0 +1,63 @@
{# Text input widget - pure HTML template #}
<div class="form-control">
<div class="flex flex-col gap-2 sm:flex-row sm:items-baseline">
<!-- Label -->
<label class="label sm:w-48 sm:shrink-0">
<span
class="label-text"
:class="{
'text-error': $store.validationErrors.hasError(fieldName, context),
'text-sm text-base-content/60': isViewMode
}"
x-text="label"
></span>
</label>
<!-- Input column -->
<div class="flex-1">
<!-- Help text -->
<template x-if="helpText">
<div class="label">
<span
class="label-text-alt text-base-content/60"
x-text="helpText"
></span>
</div>
</template>
<!-- View mode -->
<template x-if="isViewMode">
<div class="mt-1">
<template x-if="value">
<span class="text-base" x-text="value"></span>
</template>
<template x-if="!value">
<span class="text-base-content/50"></span>
</template>
</div>
</template>
<!-- Edit mode -->
<template x-if="isEditMode">
<input
type="text"
class="input input-bordered w-full"
:class="{ 'input-error': $store.validationErrors.hasError(fieldName, context) }"
x-model="value"
/>
</template>
<!-- Errors -->
<template x-if="$store.validationErrors.hasError(fieldName, context)">
<div class="label">
<template
x-for="errMsg in $store.validationErrors.getErrors(fieldName, context)"
:key="errMsg"
>
<span class="label-text-alt text-error" x-text="errMsg"></span>
</template>
</div>
</template>
</div>
</div>
</div>


@@ -0,0 +1,62 @@
{# Textarea widget - pure HTML template #}
<div class="form-control">
<div class="flex flex-col gap-2 sm:flex-row sm:items-baseline">
<!-- Label -->
<label class="label sm:w-48 sm:shrink-0">
<span
class="label-text"
:class="{
'text-error': $store.validationErrors.hasError(fieldName, context),
'text-sm text-base-content/60': isViewMode
}"
x-text="label"
></span>
</label>
<!-- Input column -->
<div class="flex-1">
<!-- Help text -->
<template x-if="helpText">
<div class="label">
<span
class="label-text-alt text-base-content/60"
x-text="helpText"
></span>
</div>
</template>
<!-- View mode -->
<template x-if="isViewMode">
<div class="mt-1">
<template x-if="value">
<p class="text-base whitespace-pre-wrap" x-text="value"></p>
</template>
<template x-if="!value">
<span class="text-base-content/50"></span>
</template>
</div>
</template>
<!-- Edit mode -->
<template x-if="isEditMode">
<textarea
class="textarea textarea-bordered w-full"
:class="{ 'textarea-error': $store.validationErrors.hasError(fieldName, context) }"
x-model="value"
></textarea>
</template>
<!-- Errors -->
<template x-if="$store.validationErrors.hasError(fieldName, context)">
<div class="label">
<template
x-for="errMsg in $store.validationErrors.getErrors(fieldName, context)"
:key="errMsg"
>
<span class="label-text-alt text-error" x-text="errMsg"></span>
</template>
</div>
</template>
</div>
</div>
</div>


@@ -0,0 +1,234 @@
{# TimeSeries table widget for measurement data #}
<div class="form-control">
<div class="flex flex-col gap-2">
<!-- Label -->
<label class="label">
<span
class="label-text"
:class="{
'text-error': $store.validationErrors.hasError(fieldName, context),
'text-sm text-base-content/60': isViewMode
}"
x-text="label"
></span>
</label>
<!-- Help text -->
<template x-if="helpText">
<div class="label">
<span
class="label-text-alt text-base-content/60"
x-text="helpText"
></span>
</div>
</template>
<!-- Form-level validation errors (root errors) for timeseries -->
<template x-if="$store.validationErrors.hasError('root', context)">
<div class="text-error">
<template
x-for="errMsg in $store.validationErrors.getErrors('root', context)"
:key="errMsg"
>
<span x-text="errMsg"></span>
</template>
</div>
</template>
<!-- View mode: display measurements as chart -->
<template x-if="isViewMode">
<div class="space-y-4">
<!-- Chart container -->
<template x-if="measurements.length > 0">
<div class="w-full">
<div class="h-64 w-full">
<canvas x-ref="chartCanvas"></canvas>
</div>
</div>
</template>
<template x-if="measurements.length === 0">
<div class="text-base-content/60 text-sm italic">No measurements</div>
</template>
<!-- Description and Method metadata -->
<template x-if="description || method">
<div class="space-y-2 text-sm">
<template x-if="description">
<div>
<span class="text-base-content/80 font-semibold"
>Description:</span
>
<p
class="text-base-content/70 mt-1 whitespace-pre-wrap"
x-text="description"
></p>
</div>
</template>
<template x-if="method">
<div>
<span class="text-base-content/80 font-semibold">Method:</span>
<span class="text-base-content/70 ml-2" x-text="method"></span>
</div>
</template>
</div>
</template>
</div>
</template>
<!-- Edit mode: editable table with add/remove controls -->
<template x-if="isEditMode">
<div class="space-y-2">
<!-- Measurements table -->
<div class="overflow-x-auto">
<template x-if="measurements.length > 0">
<table class="table-zebra table-sm table">
<thead>
<tr>
<th>Timestamp</th>
<th>Value</th>
<th>Error</th>
<th>Note</th>
<th class="w-12"></th>
</tr>
</thead>
<tbody>
<template
x-for="(measurement, index) in measurements"
:key="index"
>
<tr>
<td>
<input
type="number"
class="input input-bordered input-sm w-full"
:class="{ 'input-error': $store.validationErrors.hasError(fieldName, context) }"
:value="formatTimestamp(measurement.timestamp)"
@input="updateMeasurement(index, 'timestamp', $event.target.value)"
/>
</td>
<td>
<input
type="number"
class="input input-bordered input-sm w-full"
:class="{ 'input-error': $store.validationErrors.hasError(fieldName, context) }"
placeholder="Value"
:value="measurement.value"
@input="updateMeasurement(index, 'value', $event.target.value)"
/>
</td>
<td>
<input
type="number"
class="input input-bordered input-sm w-full"
placeholder="±"
:value="measurement.error"
@input="updateMeasurement(index, 'error', $event.target.value)"
/>
</td>
<td>
<input
type="text"
class="input input-bordered input-sm w-full"
placeholder="Note"
:value="measurement.note"
@input="updateMeasurement(index, 'note', $event.target.value)"
/>
</td>
<td>
<button
type="button"
class="btn btn-ghost btn-sm btn-circle"
@click="removeMeasurement(index)"
title="Remove measurement"
>
<svg
xmlns="http://www.w3.org/2000/svg"
class="h-4 w-4"
fill="none"
viewBox="0 0 24 24"
stroke="currentColor"
>
<path
stroke-linecap="round"
stroke-linejoin="round"
stroke-width="2"
d="M6 18L18 6M6 6l12 12"
/>
</svg>
</button>
</td>
</tr>
</template>
</tbody>
</table>
</template>
<template x-if="measurements.length === 0">
<div class="text-base-content/60 py-2 text-sm italic">
No measurements yet. Click "Add Measurement" to start.
</div>
</template>
</div>
<!-- Action buttons -->
<div class="flex gap-2">
<button
type="button"
class="btn btn-sm btn-primary"
@click="addMeasurement()"
>
<svg
xmlns="http://www.w3.org/2000/svg"
class="h-4 w-4"
fill="none"
viewBox="0 0 24 24"
stroke="currentColor"
>
<path
stroke-linecap="round"
stroke-linejoin="round"
stroke-width="2"
d="M12 4v16m8-8H4"
/>
</svg>
Add Measurement
</button>
<template x-if="measurements.length > 1">
<button
type="button"
class="btn btn-sm btn-ghost"
@click="sortByTimestamp()"
>
<svg
xmlns="http://www.w3.org/2000/svg"
class="h-4 w-4"
fill="none"
viewBox="0 0 24 24"
stroke="currentColor"
>
<path
stroke-linecap="round"
stroke-linejoin="round"
stroke-width="2"
d="M7 16V4m0 0L3 8m4-4l4 4m6 0v12m0 0l4-4m-4 4l-4-4"
/>
</svg>
Sort by Timestamp
</button>
</template>
</div>
</div>
</template>
<!-- Errors -->
<template x-if="$store.validationErrors.hasError(fieldName, context)">
<div class="label">
<template
x-for="errMsg in $store.validationErrors.getErrors(fieldName, context)"
:key="errMsg"
>
<span class="label-text-alt text-error" x-text="errMsg"></span>
</template>
</div>
</template>
</div>
</div>
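The TimeSeries table's edit mode calls four helpers — `addMeasurement()`, `removeMeasurement(index)`, `updateMeasurement(index, field, value)`, and `sortByTimestamp()` — that the diff again leaves to `widgets.js`. A minimal sketch of that state object, under the assumption that the three numeric columns are parsed from the input strings and an empty input clears the cell (field names follow the table headers; everything else is hypothetical):

```javascript
// Hypothetical sketch of the measurement helpers the TimeSeries template calls.
// Row shape and parsing behavior are assumptions based on the table columns.
function timeseriesState(measurements = []) {
  return {
    measurements,
    addMeasurement() {
      // New rows start empty, matching the "Add Measurement" button's behavior.
      this.measurements.push({ timestamp: null, value: null, error: null, note: '' });
    },
    removeMeasurement(index) {
      this.measurements.splice(index, 1);
    },
    updateMeasurement(index, field, raw) {
      // Timestamp, value, and error are numeric inputs; note stays a string.
      const numeric = field === 'timestamp' || field === 'value' || field === 'error';
      this.measurements[index][field] = numeric
        ? (raw === '' ? null : Number(raw))
        : raw;
    },
    sortByTimestamp() {
      // Rows without a timestamp sort as 0, i.e. to the front.
      this.measurements.sort((a, b) => (a.timestamp ?? 0) - (b.timestamp ?? 0));
    },
  };
}
```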


@@ -21,6 +21,10 @@
type="text/css"
/>
{# Chart.js - For timeseries charts #}
<script src="https://cdn.jsdelivr.net/npm/chart.js@4.4.0/dist/chart.umd.min.js"></script>
<script src="{% static 'js/utils/timeseries-chart.js' %}"></script>
{# Alpine.js - For reactive components #}
<script
defer
@@ -30,6 +34,8 @@
<script src="{% static 'js/alpine/search.js' %}"></script>
<script src="{% static 'js/alpine/pagination.js' %}"></script>
<script src="{% static 'js/alpine/pathway.js' %}"></script>
<script src="{% static 'js/alpine/components/schema-form.js' %}"></script>
<script src="{% static 'js/alpine/components/widgets.js' %}"></script>
{# Font Awesome #}
<link

Some files were not shown because too many files have changed in this diff Show More