Fundamentals of Measurement

Measurement involves assigning a number to a physical quantity, such as length, mass, time, or temperature, within a system of standardized units. Measurement is used in science, engineering, industry, and everyday life to achieve consistency, precision, and repeatability.

1. Purpose & significance of measurement

A. Purpose of Measurement

1. Quantification

  • Quantitative values, such as length, mass, and temperature, are assigned to physical characteristics.
  • Enables objective comparison in place of subjective evaluation.

2. Standardization

  • It guarantees uniformity by using internationally recognized units, such as those prescribed by the International System of Units (SI).
  • It facilitates international exchange, communication, and cooperation.

3. Quality Control

  • Ensures that products conform to all specifications (e.g., manufacturing tolerance).
  • Guarantees safety and reliability (e.g., structural integrity in construction).

4. Scientific Research

  • It enables hypotheses to be tested against empirical data.
  • It supports discoveries in physics, chemistry, medicine, engineering, and so on.

5. Decision-Making

  • It gives data for analysis (e.g. economic trends, medical diagnoses).
  • Helps in policy-making (e.g., environmental monitoring).

6. Process Improvement

  • Improves efficiency in industries (temperature control in chemical reactions).
  • Precise measurements reduce waste and cost.

2. Definition & brief explanations of :-

2.1 Range

Definition of Range
Range in measurement is the difference between the highest and lowest values that an instrument can measure. For a data set, it is the spread between the largest and smallest values.

For Instruments: The range spans the smallest to largest measurable values (e.g., a thermometer reading from −10°C to 110°C has a range of 120°C).

For Data Sets: The range is the difference between the largest and smallest observed values. For example, in the set 5, 8, 3, and 10, the range is 10 − 3 = 7.
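The data-set case can be sketched in a few lines of Python (the function name `statistical_range` is illustrative, not from any library):

```python
def statistical_range(values):
    """Statistical range: largest observed value minus smallest."""
    return max(values) - min(values)

print(statistical_range([5, 8, 3, 10]))  # 7
```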

Key Types of Range

  • 1. Instrument Range (Measuring Range)
  • 2. Effective Range (Optimal Operating Range)
  • 3. Dynamic Range
  • 4. Statistical Range

2.2 Sensitivity

Definition of Sensitivity
Sensitivity is the smallest variation of a measured parameter that an instrument can detect and record. It reflects the device’s ability to convert a change in the physical input (stimulus) into a corresponding change in the output signal (the quantifiable measurement).

Key Aspects of Sensitivity

  • Detection Threshold
  • Responsiveness
  • Signal-to-Noise Ratio 

Importance of Sensitivity

  • Scientific Research
  • Medical Diagnostics
  • Industrial Control

2.3 Definition & brief explanations of

a. True Value Definition: The actual value of a quantity, which would be obtained only in the total absence of measurement uncertainty.

Key Points:

  • Represents physical reality itself, free of measurement bias.
  • Does not vary for a given measurand.
  • Cannot, in practice, be known with absolute precision.

b. Indicated Value Definition: Output indicated by a measuring instrument at the time of measurement.

Key Points:

  • The value observed or recorded from the instrument
  • May deviate slightly from the true value due to measurement errors.
  • The basis for all practical measurements and decisions

Main Differences

Aspect | True Value | Indicated Value
Nature | Ideal, theoretical | Practical, observed
Determination | Unknown exactly | Read directly from instrument
Variability | Fixed (for a given system) | Changes with measurement conditions
Purpose | Reference standard | Working measurement

2.4 Errors (including limiting errors)

a. Systematic Errors

  • Errors Associated with Instruments (such as Zero Error and Calibration Drift)
  • Environmental Influences (such as the Impact of Temperature and Humidity)
  • Observation (such as parallax error in analog measuring instruments)

b.  Random Errors

  • Variations that can’t be predicted due to noise, interference, or human influences.
  • Techniques of Reduction: Take multiple measurements and apply statistical averaging  techniques.

c. Limiting Errors

  • Specified by the manufacturer as the instrument’s maximum allowable error, commonly referred to as its tolerance.
  • Expressed as a percentage of the full-scale reading.

Example:

  • A voltmeter has a range of 0–100 V with a ±2% limiting error.
  • Absolute Error = ±2% of 100 V = ±2 V.
  • If the measured value is 50 V, the relative error is: Relative Error = Absolute Error / Measured Value = 2/50 = 4%.
  • Conclusion: Using an instrument near its full scale reduces relative error.
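The voltmeter example can be checked numerically; here is a minimal Python sketch (the function name is illustrative):

```python
def relative_error_pct(full_scale, limiting_pct, measured):
    """Limiting error is fixed as a percentage of the full-scale reading,
    so the relative error grows as the reading moves down-scale."""
    absolute_error = full_scale * limiting_pct / 100.0  # e.g. 2 V on a 0-100 V meter
    return absolute_error / measured * 100.0

print(relative_error_pct(100, 2, 50))   # 4.0 (reading at half scale)
print(relative_error_pct(100, 2, 100))  # 2.0 (reading at full scale)
```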

2.5 Resolutions

Resolution is the smallest discernible variation in the input quantity that an instrument can measure reliably. It defines the fineness of the measurement and plays an important role in selecting instruments for applications requiring high accuracy.

a. Definition & Importance

  • Resolution = Smallest increment an instrument can show or detect.
  • Higher Resolution → Finer measurements (more decimal places).
  • Lower Resolution → Coarser measurements (rounded values).

b.  Types of Resolution

i.  Digital Resolution

  • The least significant digit of a digital readout indicates the minimal increment.
  • Formula: Resolution = Full-Scale Range / Number of Display Counts

ii.  Analog Resolution

  • The smallest detectable variation observable on an analog, needle-based instrument.
  • Limited by scale graduations and human perception (e.g., parallax error).

iii.  Sensor Resolution

  • The smallest variation that a sensor, for example, an encoder or a strain gauge, can detect.
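The digital-resolution formula above can be tried directly; a minimal Python sketch (the 2 V, 2000-count meter is a hypothetical example):

```python
def digital_resolution(full_scale_range, display_counts):
    """Resolution of a digital readout = full-scale range / display counts."""
    return full_scale_range / display_counts

# A hypothetical 3 1/2-digit meter: 2000 counts on a 2 V range
print(digital_resolution(2.0, 2000))  # 0.001 (1 mV per count)
```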

2.6 Accuracy

Accuracy is the closeness of a measured value to that of a true value. It is a crucial parameter to use in assessing the reliability of an instrument or measurement system.

a. Definition & Importance

  • Accuracy: The degree to which a measured value corresponds to the true or reference value.
  • High accuracy: The measured value lies close to the true value.
  • Low accuracy: The measured value deviates significantly from the true value.

b. Types of Accuracy

i.Absolute Accuracy

  • The amount of deviation observed relative to a recognized standard.
  • Formula: Absolute Accuracy = |Measured Value − True Value|

ii. Relative Accuracy (Percentage Accuracy)

  • It is presented as a percentage of the actual value or the full scale measurement.
  • Formula: Relative Accuracy (%) = (Absolute Error / True Value) × 100%
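Both formulas can be sketched in Python (the reading and reference values are hypothetical):

```python
def absolute_accuracy(measured, true_value):
    """Absolute accuracy: |Measured Value - True Value|."""
    return abs(measured - true_value)

def relative_accuracy_pct(measured, true_value):
    """Relative accuracy expressed as a percentage of the true value."""
    return absolute_accuracy(measured, true_value) / true_value * 100.0

# Hypothetical reading of 98.0 against a 100.0 reference
print(absolute_accuracy(98.0, 100.0))      # 2.0
print(relative_accuracy_pct(98.0, 100.0))  # 2.0
```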

2.7 Precision and instrument efficiency.

a. Precision in Measurement

Precision is the consistency or repeatability of measurements obtained when the same parameter is measured repeatedly under similar conditions. Unlike accuracy, precision is concerned only with how close the measurements are to one another, not with how close they are to the true value.

Key Concepts:

  • High Precision: Repeated measurements cluster tightly, with little variation between results.
  • Low Precision: The observations show significant dispersion, implying high variability.
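Precision can be quantified as the spread of repeated readings; a minimal sketch using Python’s standard library (the readings are hypothetical):

```python
import statistics

def precision_spread(readings):
    """Sample standard deviation of repeated readings:
    the smaller the spread, the higher the precision."""
    return statistics.stdev(readings)

# Two hypothetical sets of repeated readings of the same quantity
tight = [10.1, 10.1, 10.2, 10.1]  # high precision (low spread)
loose = [9.5, 10.6, 9.9, 10.4]    # low precision (high spread)
print(precision_spread(tight) < precision_spread(loose))  # True
```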

b. Instrument Efficiency

Instrument efficiency is defined as the proportion of input energy that is converted into useful output (e.g., an electrical signal or display reading) while energy losses are minimized. This characteristic is important for power-sensitive systems such as portable sensors and battery-powered devices.

Key Concepts:

  • The proportion of useful output power relative to the input power.
  • For instance, a transducer operating at 90% efficiency dissipates only 10% of the input energy as waste.
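The 90%-efficiency transducer example reduces to a simple ratio; a minimal Python sketch (power values are hypothetical):

```python
def efficiency_pct(output_power, input_power):
    """Instrument efficiency: useful output power as a percentage of input power."""
    return output_power / input_power * 100.0

# A hypothetical transducer delivering 9 W of useful output from 10 W of input
print(efficiency_pct(9.0, 10.0))  # 90.0
```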

3. Classification of instrument systems :-

Instrument systems can be classified by various criteria, including function, mode of operation, signal type, and intended application. The following presents an organized classification scheme:

3.1 Null and deflection type instruments

a. Null-Type Instruments

i. Principle: These devices work by comparing an unknown quantity with a known reference value. The measurement is adjusted to reach a state of equilibrium or ‘null condition’.

ii. Key Characteristics:

  • Equilibrium is attained when there is no deflection, and measurements are made at that point.
  • It has high precision and sensitivity as equilibrium is reached without energy being extracted from the  source under test.
  • To achieve a null condition, manual or automated adjustment is necessary.

iii. Examples: Wheatstone Bridge, Potentiometer, Dead-Weight Tester.

iv. Advantages/Disadvantages

Advantages: ✔ High accuracy (minimal error due to null condition).
✔ No loading effect (since no current flows at balance).

Disadvantages: ✖ Slower measurement (requires balancing).
✖ Requires skilled operation.
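The null principle can be illustrated with the Wheatstone bridge listed above: at balance, no current flows through the detector, and the unknown resistance follows from the ratio of the known arms. A minimal Python sketch (the arm values are hypothetical):

```python
def unknown_resistance(r1, r2, r3):
    """Wheatstone bridge at the null (balance) condition:
    R1/R2 = R3/Rx  =>  Rx = R2 * R3 / R1."""
    return r2 * r3 / r1

# Hypothetical arm values, all in ohms
print(unknown_resistance(100.0, 200.0, 150.0))  # 300.0
```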

b. Deflection-Type Instruments

i. Principle: These devices determine the unknown input quantity from the degree of displacement of a pointer or sensor.

ii. Key Characteristics:

  • Presents the value directly through deflection, such as needle displacement or a digital display.
  • Operation has been found to be more straightforward and rapid than null-type instruments.
  • Energy is extracted from the source being measured, which can cause loading errors.

iii. Examples: Analog Ammeter/Voltmeter, Bourdon Tube Pressure Gauge, Digital Multimeter (DMM).

iv. Advantages/Disadvantages

Advantages:

✔ Fast and easy to use.
✔ Suitable for dynamic measurements (e.g., varying signals).

Disadvantages:

✖ Lower accuracy than null-type (due to loading effects).
✖ Calibration drift over time.

3.2 Absolute and secondary instruments

a. absolute instrument

Definition: Absolute instruments yield direct measurements of physical quantities without requiring calibration against a standard. The measured value is obtained as a function of the instrument’s physical constants and the fundamental units.

Examples:

  1. Tangent Galvanometer
  2. Rayleigh Current Balance
  3. Absolute Spectrometer

Advantages:
✔ No calibration needed (self-contained).
✔ High accuracy (used as primary standards).

Disadvantages:
✖ Complex and slow (need careful setup).
✖ Not practical for field use (mostly lab-based).

b. Secondary Instruments

Definition: Secondary instruments must be calibrated against absolute instruments or established standards. They measure indirectly and are widely used in industrial as well as routine applications.

Examples:

  1. Ordinary Ammeter/Voltmeter
  2. Thermocouple Thermometer
  3. Bourdon Tube Pressure Gauge

Advantages:

✔ Simple and portable (easy to use in the field).
✔ Faster measurements than absolute instruments.

Disadvantages:

✖ Accuracy depends on calibration (drift over time).
✖ Prone to errors (wear and tear, environmental effects).

3.3 Analog and digital instruments

a. Analog Instruments

Definition: Analog instruments display continuous measurements by means of a moving pointer over a scale, or by a corresponding output signal proportional to the parameter being measured.

Examples:

  1. Analog Voltmeter/Ammeter
  2. Bourdon Tube Pressure Gauge
  3. Analog Thermometer

Advantages:

✔ Simple and robust 
✔ No power needed 
✔ Good for observing trends 

Disadvantages:

✖ Lower accuracy 
✖ Slower response 
✖ No data logging 

b. Digital Instruments

Definition: Digital instruments electronically convert analog signals into discrete numerical values, measuring and presenting data in digital form.

Examples:

  1. Digital Multimeter
  2. Smart Temperature Sensor
  3. Oscilloscope

Advantages:

✔ High accuracy and resolution 
✔ Fast response and automation 
✔ Minimal reading errors 

Disadvantages:

✖ Need power 
✖ More complex circuitry 

3.4 Static and dynamic characteristics, types of errors

 a. Static Characteristics

(Performance for constant or slowly varying measurements)

  • Accuracy
  • Precision (Repeatability)
  • Resolution
  • Sensitivity
  • Linearity
  • Hysteresis
  • Threshold & Dead Band

b.  Dynamic Characteristics

(Behavior with time-varying inputs)

  • First-order System
  • Second-order System
  • Frequency Response
  • Dynamic Error

c. types of errors

  1. Systematic Errors
  2. Random Errors
  3. Gross Errors
  4. Dynamic Errors
  5. Error Propagation Analysis
  6. Modern Error Reduction Techniques (Hardware Solutions)

4. Calibration of instruments: Necessity and procedure

a. Necessity of Calibration (Why calibration is essential for measurement systems)

  • Accuracy Assurance
  • Regulatory Compliance
  • Process Reliability
  • Financial Impact
  • Safety Considerations

b. Calibration Procedure (Step-by-step method for proper calibration)

Phase 1: Pre-Calibration Preparation

  • Document Review
  • Environmental Stabilization
  • Instrument Conditioning

Phase 2: Calibration Execution

  • Basic Checks
  • Measurement Cycle
  • Data Recording

Phase 3: Post-Calibration

  • Uncertainty Analysis
  • Adjustment/Correction
  • Documentation

c. Modern Calibration Technologies

  • Automated Calibration Systems
  • Digital Calibration Certificates
  • AI-Assisted Calibration
  • Portable Calibration Standards

5. Classification of measuring instruments :- Indicating, Recording and Integrating instruments.

Measuring instruments can be broadly classified into three main categories based on their role and output presentation:

 a. Indicating Instruments

Characteristics:

  • Give real-time measurement readings
  • No permanent record of measurements
  • Simple construction and operation

Examples:

  • Analog Indicating Instruments
  • Digital Indicating Instruments

Advantages:
✔ Immediate visual feedback
✔ Simple to use
✔ Lower cost compared to recording types

Disadvantages:
✖ No historical data storage
✖ Requires operator presence for monitoring

b. Recording Instruments

Characteristics:

  • Record measurements vs. time
  • Give historical data for analysis
  • Incorporate indicating functions

Types:

  1. Analog Recorders
  2. Digital Recorders

Advantages:
✔ Permanent record for analysis
✔ Trend visualization
✔ Compliance documentation

Disadvantages:
✖ More complex than indicating instruments
✖ Higher maintenance requirements

c.  Integrating Instruments

Characteristics:

  • Measure cumulative effect rather than instantaneous value
  • Execute time integration of measured quantity
  • Often include totalizing displays

Examples:

  1. Electrical Energy Meters
  2. Flow Measurement
  3. Other Types:
    • Radiation dosimeters
    • Exposure meters (light, sound)

Measurement Principle:

Total Quantity = ∫ from t₁ to t₂ (Instantaneous Value) dt
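The integral above is what, for example, an energy meter accumulates over time. A minimal Python sketch using the trapezoidal rule (the sample load data are hypothetical):

```python
def integrate_trapezoid(times, values):
    """Approximate Total Quantity = integral of the instantaneous value over
    time using the trapezoidal rule, as an integrating instrument accumulates it."""
    total = 0.0
    for i in range(1, len(times)):
        total += 0.5 * (values[i] + values[i - 1]) * (times[i] - times[i - 1])
    return total

# A constant 2 kW load over 3 hours -> 6 kWh of energy
print(integrate_trapezoid([0, 1, 2, 3], [2, 2, 2, 2]))  # 6.0
```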

Advantages:
✔ Measures total consumption/usage
✔ Essential for billing applications
✔ Long-term monitoring ability

Disadvantages:
✖ Requires periodic reading/reset
✖ Need pulse output for automation

Comparison Table

Feature | Indicating | Recording | Integrating
Output | Instantaneous value | Time-history record | Cumulative total
Data Storage | None | Continuous | Incremental
Complexity | Simple | Moderate | Moderate-high
Cost | Low | Medium | Medium-high
Maintenance | Low | Medium | Medium
Typical Use | Quick checks | Process monitoring | Billing/accounting
