How does the calibration of the eco-score work?

Calibration is the process that defines the performance class ranges (A to E) for a specific product segment and region. 

This ensures that each product's environmental score is meaningful and comparable within its segment. Calibration is required when setting up new segments or regions, or when existing scales need updating.


Key steps in calibration

  1. Definition and setup
    Calibration begins by taking a representative sample of products from a specific segment and region. These products are analyzed for their aggregated environmental footprint values. The goal is to convert these values into a performance scale, with defined thresholds for each class (A through E).
  2. Footprint data collection
    The EcoBeautyScore Consortium computes the aggregated footprint values for each product in the sample. This data is used to determine the range of scores.
  3. Class thresholds
    The Consortium sets the following thresholds:
    • Class A (best performance): The 10% of products with the lowest environmental footprint fall into Class A. For example, if the sample contains 256 products, the footprint of the 26th best product marks the Class A threshold.
    • Class E (worst performance): The 10% of products with the highest environmental footprint fall into Class E. In the same example, the footprint of the 26th worst product marks the Class E threshold.
  4. Intermediate classes (B, C, D)
    The range between the A and E thresholds is divided into three equal sections to create the thresholds for Classes B, C, and D (a sketch of this logic follows this list).
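
A minimal sketch of this threshold logic in Python is shown below. It is illustrative only: the function name, the rounding of the 10% share up to a whole product, and the handling of ties are assumptions rather than the Consortium's exact specification.

  import math

  def calibrate_thresholds(footprints):
      """Derive the A-E class thresholds from aggregated footprint values.

      footprints: aggregated footprint values of the representative sample
      (a lower value means better environmental performance).
      """
      ranked = sorted(footprints)            # best (lowest footprint) first
      n = len(ranked)
      k = math.ceil(0.10 * n)                # size of the best/worst 10% group

      a_threshold = ranked[k - 1]            # footprint of the k-th best product
      e_threshold = ranked[n - k]            # footprint of the k-th worst product

      # Split the range between the A and E thresholds into three equal
      # sections to obtain the boundaries of Classes B, C and D.
      step = (e_threshold - a_threshold) / 3
      return {
          "A/B": a_threshold,
          "B/C": a_threshold + step,
          "C/D": a_threshold + 2 * step,
          "D/E": e_threshold,
      }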

Example: Calibration of 256 Products

  • Class A: The top 10% (best environmental performance) is defined by the 26th best product.
  • Class E: The bottom 10% (worst environmental performance) is defined by the 26th worst product.

  • Classes B, C, D: The footprint range between the Class A and Class E thresholds is split into three equal sections, and the remaining products fall into Class B, C, or D according to where their footprint values land (as illustrated in the sketch below).
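
A quick check of the arithmetic behind this example, assuming the 10% share is rounded up to a whole product (which matches the figure of 26 used above):

  import math

  sample_size = 256
  class_share = 0.10                        # best and worst 10% of the sample

  k = math.ceil(class_share * sample_size)  # 25.6, rounded up
  print(k)                                  # 26: the 26th best product bounds Class A,
                                            #     the 26th worst product bounds Class E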

External sampling and selection

  • The Consortium uses its internal rules to decide which products should be part of the representative sample. This selection takes place outside the tool.
  • However, users of the tool can still select products, and those choices are shared with the Consortium to help define representative samples.

Image: Illustration of the computation 

Calibration Process

The EcoBeautyScore Consortium will perform the calibration process for all new product segments, and for updates to existing scales, using the Consortium module within the platform. This process ensures that the scoring scale accurately reflects the environmental impact of a representative sample of products in each segment and sales region.

  1. Initiation of calibration process
    The calibration process will be initiated upon the Consortium’s request for a specific product segment/region. Users will not have the ability to trigger this process independently. The calibration process requires the aggregated footprints of a pre-selected list of products that serve as the representative sample.
  2. Representative sampling by users
    Users on the Consortium platform can designate specific products to be part of the representative sample for calibration within their product segment. This empowers users to contribute data that is reflective of the products in the market.
  3. Verification and removal by the Consortium
    The Consortium has the authority to validate or remove products selected by users from the sampling process. It ensures that each product's aggregated footprint meets a specified representativeness threshold before it is included in the calibration.
  4. Calibration execution
    Once the product list is finalized, the Consortium can run the calibration process on the validated samples. This step converts the aggregated footprint values of selected products into a new scoring scale.
  5. Review and approval
    Before a new scoring scale is deployed for broader use on the platform, it must be reviewed and approved by the Consortium. The Consortium will determine if the new scale should be made active for scoring products within the platform.
  6. Unique identifier for calibration scales
    After approval, a new scoring scale is created, with a unique identifier. This scale will then be applied to score all products within the given product segment and sales region.
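
The workflow above can be read as a small state machine. The sketch below models it in Python; the state names and the CalibrationRun structure are assumptions made for illustration, not the platform's actual data model.

  from dataclasses import dataclass, field
  from enum import Enum, auto

  class CalibrationState(Enum):
      INITIATED = auto()   # requested by the Consortium for a segment/region
      SAMPLING = auto()    # users designate candidate products
      VERIFIED = auto()    # Consortium validates or removes sampled products
      EXECUTED = auto()    # thresholds computed from the validated samples
      APPROVED = auto()    # reviewed and approved; the scale receives a unique ID

  @dataclass
  class CalibrationRun:
      segment: str
      sales_region: str
      state: CalibrationState = CalibrationState.INITIATED
      sample_product_ids: list[str] = field(default_factory=list)
      scale_id: str | None = None   # assigned only after approval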

Data requirements for calibration

To perform the calibration process, the Consortium requires access to the following data for each product:

  • Product segment and sub-segment.
  • Aggregated footprint values, including data representativeness.
  • Sales region metadata.
  • Segment-specific meta-descriptors.

This detailed information allows the Consortium to ensure the calibration is accurate and reflective of the environmental impact across different product categories.
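
As an illustration, the per-product data listed above could be grouped as follows; the field names are assumptions, not the platform's actual schema.

  from dataclasses import dataclass

  @dataclass
  class CalibrationInput:
      product_id: str
      segment: str
      sub_segment: str
      aggregated_footprint: float       # aggregated environmental footprint value
      data_representativeness: float    # representativeness of the footprint data
      sales_region: str
      meta_descriptors: dict[str, str]  # segment-specific meta-descriptors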

Attention! 

A single product can be involved in multiple calibrations. This may happen if the Consortium recalibrates a product segment or if a product is sold in multiple regions, each requiring separate calibration.

Calibration Management and Data Handling

  1. Unique calibration IDs
    Each calibration process is assigned a unique identifier. The Consortium has established a clear naming convention for these identifiers to streamline audits and user navigation.
  2. Storage and export of calibration data
    The Consortium should be able to store and export all aggregated footprint values for each calibration. The exported data will include:
    • A unique identifier for each product.
    • A hash of inputs used in the calibration (for audit purposes).
    • Meta-descriptors associated with each product.

      This export functionality is essential for auditing and validation (a sketch of such an export record follows at the end of this section).
  3. Retention of historical data
    Even if a user deletes or modifies a product used in a previous calibration, the Consortium will retain access to the product’s:
    • Reference information.
    • Meta-descriptors.
    • Hash of ingredients.
    • Aggregated footprint value as of the calibration date.

      This ensures the calibration process remains transparent and traceable, even if changes are made later.
  4. Version control
    In case of methodology or database changes, the Consortium should be able to refer to the specific version of the methodology used during each calibration.
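
The sketch below pulls the export, retention, and versioning requirements above into a single illustrative record. The field names, the identifier formats shown in the example, and the use of a SHA-256 hash for the inputs are assumptions, not the Consortium's actual implementation.

  import hashlib
  import json
  from dataclasses import dataclass

  def hash_inputs(inputs: dict) -> str:
      """Stable hash of the calibration inputs, retained for audit purposes."""
      canonical = json.dumps(inputs, sort_keys=True).encode("utf-8")
      return hashlib.sha256(canonical).hexdigest()

  @dataclass
  class CalibrationExportRecord:
      calibration_id: str               # unique identifier of the calibration
      product_id: str                   # unique identifier of the product
      input_hash: str                   # hash of the inputs used (audit trail)
      meta_descriptors: dict[str, str]  # meta-descriptors as of the calibration date
      aggregated_footprint: float       # aggregated footprint value as of that date
      methodology_version: str          # methodology/database version used

  record = CalibrationExportRecord(
      calibration_id="CAL-2024-EU-0001",   # hypothetical identifier format
      product_id="PRD-000123",             # hypothetical product identifier
      input_hash=hash_inputs({"ingredients": ["aqua", "glycerin"], "mass_g": 250}),
      meta_descriptors={"packaging": "bottle"},
      aggregated_footprint=0.42,
      methodology_version="v1.0",
  )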