Capture and visualization of process, product, quality,
reliability, and environmental data over time
Automated Results and the Data Historian
Automated Results has been working with Data Historians since the 1990s! We have extensive experience with historians from multiple vendors, and our team of engineers is trained and accredited. Some examples of the work we perform within various industries are:
- Data Historian Design and Standardization
- Data Point Creation
- Data Interface Integration with Multiple Data Sources
- Asset Oriented Design and Implementation
- Process Analytics
- Historian Visualization Tools
- Business Analytics and KPIs
- Data Lake Integrations
- Data Modeling
- Custom Application Development for Data Historians
What is a Data Historian?
A Data Historian (also known as a Process Historian or Operational Historian)
is a software program that records and retrieves production and process data
by time; it stores the information in a time series database that can efficiently
store data with minimal disk space and fast retrieval. Time series information
is often displayed in a trend or as tabular data over a time range (ex. the last
day, last 8 hours, last year).
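As a minimal sketch (in Python, with invented names, not any vendor's API), the core idea of a time series database is append-only storage of timestamped readings plus fast retrieval over a time range:

```python
from bisect import bisect_left, bisect_right
from datetime import datetime, timedelta

class TimeSeriesStore:
    """Minimal in-memory time-series store: append readings, query by range."""

    def __init__(self):
        self._times = []   # timestamps, kept in sorted order
        self._values = []  # parallel list of readings

    def record(self, ts, value):
        # Historian data arrives in time order, so appends keep the list sorted.
        self._times.append(ts)
        self._values.append(value)

    def query(self, start, end):
        """Return (timestamp, value) pairs in [start, end] for trending."""
        lo = bisect_left(self._times, start)
        hi = bisect_right(self._times, end)
        return list(zip(self._times[lo:hi], self._values[lo:hi]))

# Record one reading per minute for a day, then pull "the last 8 hours".
store = TimeSeriesStore()
t0 = datetime(2024, 1, 1)
for minute in range(24 * 60):
    store.record(t0 + timedelta(minutes=minute), 20.0 + minute % 5)

last_8h = store.query(t0 + timedelta(hours=16), t0 + timedelta(hours=24))
print(len(last_8h))  # number of samples in the 8-hour window
```

A real historian adds on-disk archives, compression, and indexing on top of this basic record/query pattern.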
There are many uses for a Data Historian in different industries:
- Manufacturing site to record instrument readings
- Process (ex. flow rate, valve position, vessel level, temperature, pressure)
- Production Status (ex. machine up/down, downtime reason tracking)
- Performance Monitoring (ex. units/hour, machine utilization vs. machine capacity, scheduled vs. unscheduled outages)
- Product Genealogy (ex. start/end times, material consumption quantity, lot # tracking, product setpoints and actual values)
- Quality Control (ex. quality readings inline or offline in a lab for compliance to specifications)
- Manufacturing Costing (ex. machine and material costs assignable to a production run)
- Utilities (ex. Coal, Hydro, Nuclear, and Wind power plants, transmission, and distribution)
- Data Center to record device performance for the server environment (ex. resource utilization, temperatures, fan speeds), the network infrastructure (ex. router throughput, port status, bandwidth accounting), and applications (ex. health, execution statistics, resource consumption)
- Heavy Equipment monitoring (ex. recording of run hours, instrument and equipment readings for predictive maintenance)
- Racing (ex. environmental and equipment readings for sailboats and race cars)
- Environmental monitoring (ex. weather, sea level, atmospheric conditions, ground water contamination)
What can you record in a Data Historian?
A Data Historian records data over time from one or more locations for the user to analyze. Whether the point of interest is a valve, a tank level, a fan temperature, or network bandwidth, the user can evaluate its operation, efficiency, profitability, and production setbacks. A historian can record integers (whole numbers), real numbers (floating point with a fraction), bits (on or off), strings (ex. a product name), or a selected item from a finite list of values (ex. Off, Low, High).
Some examples of what might be recorded in a data historian include:
Analog Readings: temperature, pressure, flowrates, levels, weights, CPU temperature, mixer speed, fan speed
Digital Readings: valves, limit switches, motors on/off, discrete level sensors
Product Info: product id, batch id, material id, raw material lot id
Quality Info: process and product limits, custom limits
Alarm Info: out of limits signals, return to normal signals
Aggregate Data: average, standard deviation, Cpk, moving average
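The value kinds above can be sketched as a single sample record. This is an illustrative model with hypothetical tag names, not a specific historian's schema:

```python
from dataclasses import dataclass
from datetime import datetime
from enum import Enum
from typing import Union

class FanMode(Enum):
    """A selected item from a finite list of values (ex. Off, Low, High)."""
    OFF = 0
    LOW = 1
    HIGH = 2

@dataclass
class Sample:
    """One historian sample: a tag name, a timestamp, and a typed value."""
    tag: str
    ts: datetime
    value: Union[int, float, bool, str, FanMode]

now = datetime(2024, 1, 1, 12, 0)
samples = [
    Sample("line1.unit_count", now, 1250),     # integer (whole number)
    Sample("tank3.level_pct", now, 72.4),      # real number (floating point)
    Sample("valve7.open", now, True),          # bit (on or off)
    Sample("line1.product", now, "WidgetA"),   # string (ex. product name)
    Sample("fan2.mode", now, FanMode.HIGH),    # enumerated value
]
for s in samples:
    print(f"{s.tag}: {s.value}")
```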
Examples where a Data Historian is useful
- Demand-driven manufacturing
- Plant and Building energy management
- Batch, Continuous, and Transitional manufacturing processes
- Corporate and Cloud Data Centers
Advantages of a Data Historian
Off-the-shelf data historians have several advantages over home-grown systems and off-the-shelf relational databases:
- Ready-made data acquisition interfaces to control systems, OPC-compliant equipment, and intelligent electronic devices (IEDs)
- Data acquisition redundancy, store and forward, and fail-over features
- High speed data collection (ex. 15,000 to 50,000 samples per second) with sub-second time resolution
- Efficient data storage through compression: system noise and values that can be reconstructed through interpolation are filtered out
- Simple archival storage in blocks of time, rather than the monolithic structure of a relational database, allows 20 or 30 years of information to remain available online
- Data Security by role down to individual data point granularity
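The compression advantage above can be illustrated with simple deadband filtering: a sample is stored only when it moves outside a tolerance band around the last stored value, so steady readings and sensor noise consume almost no disk. This is a simplified sketch; commercial historians typically use more sophisticated algorithms (ex. swinging-door compression):

```python
def deadband_compress(samples, tolerance):
    """Keep only samples that differ from the last stored value by more than
    `tolerance`; dropped values can be approximated by interpolation at
    read time. (A simplified stand-in for real historian compression.)"""
    stored = []
    for ts, value in samples:
        if not stored or abs(value - stored[-1][1]) > tolerance:
            stored.append((ts, value))
    return stored

# A steady reading with small sensor noise: nearly every point compresses away.
raw = [(t, 100.0 + 0.01 * (t % 3)) for t in range(1000)]
compressed = deadband_compress(raw, tolerance=0.05)
print(len(raw), "->", len(compressed))  # 1000 -> 1
```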