Flow for Securing Data

  1. A scheduled workflow is created to run at specific times. This workflow triggers an alert. The alert body contains a comma-delimited list of trading entity IDs.
  2. The Heritage Alerts Listener (HAL) Windows service receives the alert. It fires off the extract.
  3. The extract writes a new record to the ExtractRequest table of the staging database with the details of the request, the extract type “Brain”, and the status “Start”. It then runs the five queries and writes the results to the relevant staging tables (see the table below). All existing data in each staging table is deleted first, as there is only ever one set of data. While reading the data, the extract updates the status on the ExtractRequest record to “Read ‘n’”, where ‘n’ is the sequence number of the extract; while writing the data, it updates the status to “Write ‘n’”; when complete, it updates the status to “Done”.
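The status lifecycle described in step 3 can be sketched as follows. This is a minimal illustration, not the actual service code: the extract names and sequence numbers come from the staging-table list below, while the function itself and the exact status-string formats are assumptions based on the description.

```python
# Illustrative sketch of the ExtractRequest status lifecycle described above.
# The extract names match the staging-table list; the function is hypothetical.

EXTRACTS = ["Open Contracts", "Purchase and Sales", "Booking", "Inventory", "Shipment"]

def run_extracts(set_status):
    """Run each extract in sequence, reporting status via set_status()."""
    set_status("Start")                  # new ExtractRequest record written
    for n, extract in enumerate(EXTRACTS, start=1):
        set_status(f"Read {n}")          # reading source data for this extract
        set_status(f"Write {n}")         # writing results to the staging table
    set_status("Done")                   # all extracts complete

statuses = []
run_extracts(statuses.append)
print(statuses[0], statuses[-1])  # Start Done
```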

Sequence | Extract            | Staging Table
---------|--------------------|-------------------
1        | Open Contracts     | BrainOpenContracts
2        | Purchase and Sales | BrainPurchAndSale
3        | Booking            | BrainBooking
4        | Booking            | BrainBookingCosts
5        | Inventory          | BrainInventory
6        | Shipment           | BrainShipment


Endpoints

There are five endpoints, one for each extract. They are:

  1. [host]/BrainOpenContracts(tradingentityId=’br’)
  2. [host]/BrainPurchasesAndSales(tradingentityId=’br’)
  3. [host]/BrainBookings(tradingentityId=’br’)
  4. [host]/BrainInventory(tradingentityId=’br’)
  5. [host]/BrainShipments(tradingentityId=’br’)

“[host]” is the address of the web server, which uses port 4760 for HTTP and 5760 for HTTPS.
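As an illustration, an endpoint URL can be assembled from the pieces above. The endpoint names and ports come from this section; the helper function and the example host name are hypothetical.

```python
# Hypothetical helper that builds a Brain endpoint URL from the parts
# described above: host, scheme-dependent port, endpoint name, and
# trading entity ID. Ports 4760/5760 come from the endpoint description.

def brain_url(host, endpoint, trading_entity_id, https=False):
    scheme = "https" if https else "http"
    port = 5760 if https else 4760
    return f"{scheme}://{host}:{port}/{endpoint}(tradingentityId='{trading_entity_id}')"

print(brain_url("example-server", "BrainOpenContracts", "br"))
# http://example-server:4760/BrainOpenContracts(tradingentityId='br')
```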

Extracted Data

Open Contracts

Read all contracts, both current and archived.

Purchases and Sales

Read all contracts, both current and archived.

Bookings

Read all contracts, both current and archived. We also retrieve all costs that are either included in P&L or that are special container costs with the hard-coded code “40FTCOHC”.
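The cost filter above can be expressed as a simple predicate. This is a sketch only: the field names on the cost records are hypothetical, but the rule itself (included in P&L, or a special container cost with the hard-coded code “40FTCOHC”) is as described.

```python
# Hypothetical cost records; only the filtering rule comes from the text.
SPECIAL_CONTAINER_CODE = "40FTCOHC"  # hard-coded code from the description

def include_cost(cost):
    """Return True if a booking cost should be extracted."""
    return cost["included_in_pnl"] or (
        cost["is_special_container"] and cost["code"] == SPECIAL_CONTAINER_CODE
    )

costs = [
    {"code": "FRT", "included_in_pnl": True, "is_special_container": False},
    {"code": "40FTCOHC", "included_in_pnl": False, "is_special_container": True},
    {"code": "MISC", "included_in_pnl": False, "is_special_container": False},
]
print([c["code"] for c in costs if include_cost(c)])  # ['FRT', '40FTCOHC']
```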

Inventory

Read all inventory data.

Shipments

Read all shipment data.

Troubleshooting

  1. Check that the scheduled workflow is running. This can be done in Traders Desktop in the ITAS Workflow Console screen. Start the workflow if necessary.
  2. Check that the HAL Windows service is running. Start the service if necessary.
  3. Look at the HAL log in the ITAS folder to see if it is being triggered. If it isn’t being triggered, this suggests there are problems with alerts.
  4. Look in the ExtractRequest table in the staging database to see if requests are being made. If there are no entries, the extract may be unable to communicate with the database. If an entry shows that the extract has not completed, this suggests it has crashed.
  5. If all the data is correct, but the endpoints aren’t returning correct data, check that the connection string for the staging database in the ODATA service is correct.
  6. Check that the ODATA web service is running in IIS.
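Step 4 can be checked with a query like the following. This is a sketch using an in-memory SQLite database: the ExtractRequest table name comes from the document, but the column names are assumptions, and the real table lives in the staging database.

```python
import sqlite3

# Illustrative check for stuck extract requests. The ExtractRequest table
# name comes from the document; the columns here are assumptions.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE ExtractRequest (Id INTEGER, ExtractType TEXT, Status TEXT)")
conn.executemany(
    "INSERT INTO ExtractRequest VALUES (?, ?, ?)",
    [(1, "Brain", "Done"), (2, "Brain", "Write 3")],
)

# Any request whose status never reached "Done" suggests a crashed extract.
stuck = conn.execute(
    "SELECT Id, Status FROM ExtractRequest WHERE Status <> 'Done'"
).fetchall()
print(stuck)  # [(2, 'Write 3')]
```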

 



Manual Trigger

Trigger an extract using the ITAS API Events/Publish endpoint, passing “GF” as the trading entity ID and the new alert ID as the topic. An example message body is:

{
    "TID": "TID-002",
    "InnerBody": {
        "Title": "Supply Brain Trigger",
        "Message": "Now Trigger",
        "AdditionalParameters": [
            {
                "Key": "TradingEntityIds",
                "Value": "BR, RG"
            }
        ]
    }
}
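A minimal sketch of building and posting this message from Python follows. The payload shape mirrors the example message body above; the Events/Publish URL shape, the host, and the topic value are placeholders, and the actual network call is defined but not executed here.

```python
import json
import urllib.request

def build_trigger_payload(trading_entity_ids, tid="TID-002"):
    """Build an alert message body matching the example above."""
    return {
        "TID": tid,
        "InnerBody": {
            "Title": "Supply Brain Trigger",
            "Message": "Now Trigger",
            "AdditionalParameters": [
                {"Key": "TradingEntityIds", "Value": ", ".join(trading_entity_ids)}
            ],
        },
    }

def publish(host, topic, payload):
    """POST the payload to the ITAS API Events/Publish endpoint.

    The exact URL shape is an assumption; adjust to the real API.
    """
    req = urllib.request.Request(
        f"http://{host}/Events/Publish?topic={topic}",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    return urllib.request.urlopen(req)

body = build_trigger_payload(["BR", "RG"])
print(body["InnerBody"]["AdditionalParameters"][0]["Value"])  # BR, RG
```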

