Tender Analysis
This flow provides a step-by-step guide to integrating the Blockbrain API for automated tender (TED) ingestion, lot evaluation, and insight creation.
1. Introduction
The intended use case is to fetch TED notices, split them into LOTs, run an AI evaluation (portfolio fit, quantity, technical complexity), save scored insights to a knowledge base and spreadsheet, and notify stakeholders. The evaluation logic and prompts are configured in the Blockbrain UI; the API is used to automate processing at scale.
All functionality available in the UI is also exposed through the API. The UI should be used to set up the evaluation bot and prompt, while the API should be used to implement automated or large-scale workflows. The complete Blockbrain API documentation is provided here: https://blocky.theblockbrain.ai/docs
Before proceeding with the workflow, ensure you have:
Configured your Tender Evaluation bot.
Defined and saved the evaluation prompt in the Prompt Library and noted the agentTaskId.
Selected the appropriate LLM model in the bot settings.
Configured Google Sheets & Drive credentials (if using spreadsheet output).
Retrieved your bot_id and agentTaskId as described in Section 4 of this guide.
2. API Basic Workflow
The automated tender analysis process consists of the main steps described below. It is strongly recommended to create a separate Data Room (conversation) or Insights Folder per publication/notice to maintain isolation between evaluations and enable parallel processing.
Step 1 – Create Report Spreadsheet
Create a new Google Spreadsheet to store tender evaluation results.
Endpoint: POST https://sheets.googleapis.com/v4/spreadsheets
API Reference: Google Sheets API - Spreadsheets.create
Example request URL
https://sheets.googleapis.com/v4/spreadsheets
Example request body
Example response body (trimmed)
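A minimal Python sketch of this request, assuming an OAuth 2.0 access token with the spreadsheets scope is already available (token acquisition is out of scope here; the report title is illustrative):

```python
import requests

GOOGLE_TOKEN = "<oauth2-access-token>"  # token with the https://www.googleapis.com/auth/spreadsheets scope

resp = requests.post(
    "https://sheets.googleapis.com/v4/spreadsheets",
    headers={"Authorization": f"Bearer {GOOGLE_TOKEN}"},
    json={"properties": {"title": "Tender Evaluation Report"}},
)
resp.raise_for_status()
spreadsheet = resp.json()

# spreadsheetId and spreadsheetUrl are reused when appending rows in Step 10
spreadsheet_id = spreadsheet["spreadsheetId"]
print(spreadsheet_id, spreadsheet["spreadsheetUrl"])
```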
Step 2 – Create Insights Folder
This operation creates a new insights folder in the knowledge base to store the generated insights for today's tender batch.
Endpoint: POST /cortex/notes/insight-folder
API Reference: Create Insights Folder – API Documentation
Example request URL
https://blocky.theblockbrain.ai/cortex/notes/insight-folder
Example request body
Example response body
The response will contain a folderId and path that identify this insights folder. These are needed for saving insights in subsequent steps.
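A sketch of the folder creation call in Python; the request body (a folder name) and the auth header format are assumptions — check the linked API reference for the exact schema:

```python
import requests
from datetime import date

BLOCKBRAIN_API_KEY = "<api-key>"
HEADERS = {"Authorization": f"Bearer {BLOCKBRAIN_API_KEY}"}  # auth scheme is an assumption; see the Blockbrain API docs

resp = requests.post(
    "https://blocky.theblockbrain.ai/cortex/notes/insight-folder",
    headers=HEADERS,
    json={"name": f"TED Batch {date.today().isoformat()}"},  # body field name is an assumption
)
resp.raise_for_status()
folder = resp.json()

# folderId and path identify this insights folder and are reused in Step 9
folder_id, folder_path = folder["folderId"], folder["path"]
```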
Step 3 – Query TED Notices
Fetch tender notices from the TED API filtered by CPV codes and dates.
Endpoint: POST https://tedweb.api.ted.europa.eu/v3/notices/search
API Reference: TED Notices Search – API Documentation
Example request body
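A sketch of the TED search request; the query syntax, body fields, and response field names shown are assumptions based on the TED expert-search format and should be verified against the TED API reference:

```python
import requests

# CPV codes and date window to filter on (illustrative values)
ted_body = {
    "query": "classification-cpv IN (30230000 72000000) AND publication-date>=20240101",
    "fields": ["publication-number", "notice-title", "links"],
    "page": 1,
    "limit": 50,
}

resp = requests.post(
    "https://tedweb.api.ted.europa.eu/v3/notices/search",  # endpoint as given above
    json=ted_body,
)
resp.raise_for_status()
notices = resp.json().get("notices", [])  # response field name is an assumption
```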
Step 4 – Split Notices into LOTs
Split each fetched notice into individual LOT items for per-LOT evaluation.
Handle array and language-object fields reliably (title-lot, description-lot, main-classification-lot).
Output: one item per LOT containing original notice metadata.
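A sketch of the splitting logic, assuming the per-LOT fields may arrive as plain values, lists, or language-keyed objects:

```python
def as_list(value):
    """Normalize a field that may be a single value or a list."""
    if value is None:
        return []
    return value if isinstance(value, list) else [value]


def pick_language(value, preferred=("DEU", "ENG")):
    """Return one string from a language-keyed object or a plain string."""
    if isinstance(value, dict):
        for lang in preferred:
            if value.get(lang):
                return value[lang]
        return next(iter(value.values()), "")
    return value or ""


def split_into_lots(notice: dict):
    """Yield one item per LOT, carrying the original notice metadata."""
    titles = as_list(notice.get("title-lot"))
    descriptions = as_list(notice.get("description-lot"))
    cpvs = as_list(notice.get("main-classification-lot"))
    for i, title in enumerate(titles):
        yield {
            "publication-number": notice.get("publication-number"),
            "lot-index": i + 1,
            "title": pick_language(title),
            "description": pick_language(descriptions[i]) if i < len(descriptions) else "",
            "cpv": cpvs[i] if i < len(cpvs) else None,
            "notice": notice,  # keep the raw payload for audit and re-scoring
        }
```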
Step 5 – Create Data Room & Submit LOT Content
Create a conversation (Data Room) per notice or publication-number and submit LOT content to the evaluation bot.
Create Data Room Endpoint: POST /cortex/active-bot/{bot_id}/convo
API Reference: Create Data Room – API Documentation
Example request URL
https://blocky.theblockbrain.ai/cortex/active-bot/687e4c1b04e932e500e46c52/convo
Example request body
Example response body
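A sketch of the Data Room creation wrapped in a helper; the body field (name) and the response field (convoId) are assumptions:

```python
import requests

HEADERS = {"Authorization": "Bearer <api-key>"}  # auth scheme is an assumption (see Step 2)

def create_data_room(bot_id: str, name: str) -> str:
    """Create a conversation (Data Room) for one notice / publication-number."""
    resp = requests.post(
        f"https://blocky.theblockbrain.ai/cortex/active-bot/{bot_id}/convo",
        headers=HEADERS,
        json={"name": name},  # body field name is an assumption
    )
    resp.raise_for_status()
    return resp.json()["convoId"]  # response field name is an assumption

convo_id = create_data_room("687e4c1b04e932e500e46c52", "TED <publication-number> – Lot 1")
```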
Submit LOT Content Endpoint: POST /cortex/completions/v2/user-input
API Reference: Add User Messages – API Documentation
Example request URL
https://blocky.theblockbrain.ai/cortex/completions/v2/user-input
Example request body
If enableStreaming is enabled, the response is streamed token by token. Concatenate the returned new_token objects to form the final output.
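A sketch of submitting LOT content with streaming disabled; apart from convoId, agentTaskId, sessionId, enableStreaming, and messageId, which appear elsewhere in this guide, the field names are assumptions:

```python
import uuid
import requests

HEADERS = {"Authorization": "Bearer <api-key>"}  # assumption, as in Step 2

def submit_lot(convo_id: str, agent_task_id: str, session_id: str, lot_text: str) -> str:
    """Submit one LOT's content to the evaluation bot and return the messageId."""
    payload = {
        "convoId": convo_id,
        "agentTaskId": agent_task_id,
        "sessionId": session_id,          # one sessionId per processing run (see Best Practices)
        "enableStreaming": False,         # with streaming on, concatenate new_token objects instead
        "message": lot_text,              # field name is an assumption
    }
    resp = requests.post(
        "https://blocky.theblockbrain.ai/cortex/completions/v2/user-input",
        headers=HEADERS,
        json=payload,
    )
    resp.raise_for_status()
    return resp.json()["messageId"]  # exact response shape may differ

session_id = str(uuid.uuid4())
```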
Step 6 – Retrieve Evaluation Result
This operation retrieves the evaluation message detail using the messageId returned in the response of the previous step.
Endpoint: GET /cortex/message/{message_id}
API Reference: Get Message Detail – API Documentation
Example request URL
https://blocky.theblockbrain.ai/cortex/message/{message_id}
The evaluation result is the content of the response body, while the originally submitted LOT content is returned in targetText.
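A sketch of retrieving the evaluation result; content and targetText are the fields named above, while the auth header format is an assumption:

```python
import requests

HEADERS = {"Authorization": "Bearer <api-key>"}  # assumption, as in Step 2

def get_evaluation(message_id: str) -> tuple[str, str]:
    """Return (evaluation result, originally submitted LOT content) for one message."""
    resp = requests.get(
        f"https://blocky.theblockbrain.ai/cortex/message/{message_id}",
        headers=HEADERS,
    )
    resp.raise_for_status()
    body = resp.json()
    return body["content"], body["targetText"]
```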
Alternatively, retrieve the complete message list
This operation retrieves the complete list of messages for a given conversation (convoId).
Endpoint: POST /cortex/message/list
API Reference: Get Message List – API Documentation
Example request URL
https://blocky.theblockbrain.ai/cortex/message/list
Example request body
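A minimal sketch of the list call, assuming the request body carries the convoId:

```python
import requests

HEADERS = {"Authorization": "Bearer <api-key>"}  # assumption, as in Step 2

def list_messages(convo_id: str) -> list:
    """Fetch all messages of a conversation; the exact response shape may differ."""
    resp = requests.post(
        "https://blocky.theblockbrain.ai/cortex/message/list",
        headers=HEADERS,
        json={"convoId": convo_id},
    )
    resp.raise_for_status()
    return resp.json()
```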
Step 7 – Scoring & Evaluation Logic
Compute scores for each LOT using the evaluation bot plus deterministic rules.
Implementation notes: extract CPV prefixes (first 5 digits), detect quantities via fields or regex in title/description, compute complexity from multiple flag fields.
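A sketch of the deterministic part of the scoring (CPV prefix extraction, quantity detection, complexity flags); the regex, flag names, and weighting are illustrative, not the production rules:

```python
import re

def cpv_prefix(cpv_code: str) -> str:
    """First 5 digits of a CPV code, e.g. '30230000-0' -> '30230'."""
    digits = re.sub(r"\D", "", cpv_code or "")
    return digits[:5]

QUANTITY_RE = re.compile(r"\b(\d{1,6})\s*(stk|stück|pcs|units?)\b", re.IGNORECASE)

def detect_quantity(lot: dict) -> int | None:
    """Prefer explicit quantity fields, fall back to a regex over title/description."""
    if lot.get("quantity"):
        return int(lot["quantity"])
    match = QUANTITY_RE.search(f"{lot['title']} {lot['description']}")
    return int(match.group(1)) if match else None

def complexity_score(lot: dict) -> int:
    """Count how many complexity flags are set (flag names are assumptions)."""
    flags = ("framework-agreement", "security-clearance", "on-site-service")
    return sum(1 for f in flags if lot.get(f))
```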
Step 8 – Filter Relevant Results
Keep only LOTs meeting the relevance threshold (configurable).
Use filter nodes (keep only LOTs with a relevance score above 5) and business-rule nodes to remove auto-PASS items.
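A sketch of the relevance filter with the threshold kept configurable; the evaluation field names are assumptions:

```python
RELEVANCE_THRESHOLD = 5  # keep configurable (see Best Practices)

def is_relevant(evaluation: dict) -> bool:
    """Drop auto-PASS items and anything at or below the relevance threshold."""
    if evaluation.get("recommendation") == "PASS":   # field name is an assumption
        return False
    return evaluation.get("score", 0) > RELEVANCE_THRESHOLD

# relevant_lots = [e for e in evaluations if is_relevant(e)]
```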
Step 9 – Create Insight from Tender HTML & Save to Knowledge Base
Download tender HTML, convert to Markdown, create an insight note, generate a shareable link and save the insight to the knowledge base.
Download tender HTML: use notice.links.htmlDirect.DEU (fallback to ENG).
Convert HTML → Markdown
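A sketch of the download-and-convert step; markdownify is just one possible HTML → Markdown converter, and the link structure follows the field path named above:

```python
import requests
from markdownify import markdownify  # any HTML -> Markdown converter would do

def tender_markdown(notice: dict) -> str:
    """Download the tender HTML (German first, English fallback) and convert it to Markdown."""
    links = notice["links"]["htmlDirect"]
    html_url = links.get("DEU") or links.get("ENG")
    html = requests.get(html_url).text
    return markdownify(html)
```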
Create insight note
Endpoint: POST /cortex/notes/add-note
API Reference: Add Note – API Documentation
Example request URL
https://blocky.theblockbrain.ai/cortex/notes/add-note
Example request body
Example response body (trimmed)
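A sketch of the add-note call; the body and response field names are assumptions:

```python
import requests

HEADERS = {"Authorization": "Bearer <api-key>"}  # assumption, as in Step 2

def add_note(folder_id: str, title: str, markdown: str) -> str:
    """Create an insight note in the folder from Step 2 and return its id."""
    resp = requests.post(
        "https://blocky.theblockbrain.ai/cortex/notes/add-note",
        headers=HEADERS,
        json={"folderId": folder_id, "title": title, "content": markdown},  # field names are assumptions
    )
    resp.raise_for_status()
    return resp.json()["noteId"]  # response field name is an assumption
```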
Generate shareable link
Endpoint: POST /cortex/notes/share/generate-link
API Reference: Generate Share Link – API Documentation
Example request URL
https://blocky.theblockbrain.ai/cortex/notes/share/generate-link
Example request body
Example response body (trimmed)
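A sketch of the share-link call; the body and response field names are assumptions:

```python
import requests

HEADERS = {"Authorization": "Bearer <api-key>"}  # assumption, as in Step 2

def generate_share_link(note_id: str) -> str:
    resp = requests.post(
        "https://blocky.theblockbrain.ai/cortex/notes/share/generate-link",
        headers=HEADERS,
        json={"noteId": note_id},   # field name is an assumption
    )
    resp.raise_for_status()
    return resp.json()["link"]      # response field name is an assumption
```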
Save insight to knowledge base collection
Endpoint: POST /cortex/notes/save-insights-to-knowledge-base
API Reference: Save Insights – API Documentation
Example request URL
https://blocky.theblockbrain.ai/cortex/notes/save-insights-to-knowledge-base
Example request body
Example response body (trimmed)
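A sketch of the save call; apart from folderId and path from Step 2, the field names are assumptions:

```python
import requests

HEADERS = {"Authorization": "Bearer <api-key>"}  # assumption, as in Step 2

def save_to_knowledge_base(folder_id: str, folder_path: str, note_id: str) -> None:
    resp = requests.post(
        "https://blocky.theblockbrain.ai/cortex/notes/save-insights-to-knowledge-base",
        headers=HEADERS,
        json={"folderId": folder_id, "path": folder_path, "noteIds": [note_id]},  # field names are assumptions
    )
    resp.raise_for_status()
```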
Step 10 – Append Evaluation to Spreadsheet
Map evaluation fields to Google Sheets columns and append one row per LOT, as sketched after the column list below.
Typical columns:
ABC-Rating
Score
Title-DE
HTML-Link-DE
Tender-Nr
Lot
Recommendation
CPV-Code
HTML-Link-EN
Blockbrain Chat Link
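A sketch of the append call using the Google Sheets values.append endpoint; the row must mirror the column order above, and the evaluation field names are assumptions:

```python
import requests

GOOGLE_TOKEN = "<oauth2-access-token>"  # token with the spreadsheets scope (see Step 1)

def append_row(spreadsheet_id: str, row: list) -> None:
    """Append one evaluation row to the report spreadsheet."""
    resp = requests.post(
        f"https://sheets.googleapis.com/v4/spreadsheets/{spreadsheet_id}/values/Sheet1!A1:append",
        params={"valueInputOption": "USER_ENTERED"},
        headers={"Authorization": f"Bearer {GOOGLE_TOKEN}"},
        json={"values": [row]},
    )
    resp.raise_for_status()

# Illustrative call in the column order listed above:
# append_row(spreadsheet_id, [abc_rating, score, title_de, html_link_de, tender_nr,
#                             lot_no, recommendation, cpv_code, html_link_en, chat_link])
```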
Step 11 – Clean Up Data Rooms (Optional)
Delete or archive conversations and temporary data rooms after processing to free resources.
Endpoint: DELETE /cortex/conversation/{convoId}
Example request URL
https://blocky.theblockbrain.ai/cortex/conversation/68de93f1fc203c550a278b09
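A minimal sketch of the cleanup call (the auth header format is an assumption):

```python
import requests

HEADERS = {"Authorization": "Bearer <api-key>"}  # assumption, as in Step 2

def delete_data_room(convo_id: str) -> None:
    """Delete a conversation (Data Room) after its LOTs have been processed."""
    resp = requests.delete(
        f"https://blocky.theblockbrain.ai/cortex/conversation/{convo_id}",
        headers=HEADERS,
    )
    resp.raise_for_status()
```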
Best Practices
Create one Insights Folder or Data Room per publication-number to isolate context.
Store evaluation prompts in the Prompt Library and reference them via agentTaskId.
Generate a sessionId (UUID) per processing run for traceability.
Keep scoring thresholds and weights configurable.
Persist raw notice payload and evaluation metadata for audit and re‑scoring.