How Retailers Can Fix Supplier Item Data at the Source

By Eden Shulman, Content Writer

Last Updated April 27, 2026

12 min read

In this article, learn about: 

  • How GDSN and GS1 standards can replace ad hoc item data intake 

  • When compliance programs are the right tool for reinforcing supplier data quality 

  • A practical starting point for retailer teams ready to address item data quality 


Bad item data fails across receiving, inventory, fulfillment, shipping, and the digital shelf, often simultaneously, and often long after the original error entered the system. The structural forces behind this (supplier variability, fragmented intake, missing internal ownership, constant product change) don't resolve on their own. 

Earlier, we covered why item data has become an infrastructure problem and what the downstream failures actually look like. This article is about what to do about it. The infrastructure to solve supplier item data at scale already exists; the question is whether your organization has built processes, governance, and standards around it. 

Related Reading: Why Supplier Item Data Failures Cascade and What They Cost Retailers

What Standards-Based Item Data Collection Looks Like 

The infrastructure to solve the item data problem already exists. GS1's Global Data Synchronization Network (GDSN) is the standards layer built specifically for this problem. Rather than managing item data through disparate spreadsheets, email threads, and vendor portal submissions, retailers and suppliers connect through GDSN-certified data pools: interoperable hubs where a single connection reaches all trading partners. GS1 describes GDSN as standardized product information transfer and continuous synchronization of product master data: weight, description, brand name, GTIN, manufacturer information. 

The critical word is synchronization. GDSN is a different model entirely: item data stays current as products change and is validated against global rules before it ever reaches the retailer's database, rather than requiring manual re-entry or a periodic refresh that's already out of date by the time it runs. 

The core attributes that flow through this network cover the full range of what downstream systems actually need: 

  • Identity and branding: GTIN, brand names, manufacturer information, and Global Location Numbers (GLNs) 

  • Physical attributes: Weight, dimensions, and pack configuration at every level of the product hierarchy 

  • Content and description: Product descriptions, ingredients, food allergens, and nutritional data 

  • Regulatory data: Traceability codes and regulatory attributes required for FSMA 204 compliance on Food Traceability List items 

What this means in practice is a shift from a transactional relationship to a systemic one. Under the old model, a supplier fills out a form at onboarding and the retailer hopes the data is accurate and stays current. Under a GDSN-connected model, the supplier maintains their product master in a certified data pool, the retailer subscribes to that data, and changes flow through automatically. 

For retailers, the operational value is an authoritative data source: one version of the truth that every internal system draws from. That is the prerequisite for the kind of automation that requires trusting the item record without manually verifying it first. 

Some retailers already treat GDSN connectivity as a standard business requirement for all suppliers of consumer packaged goods, precisely because accurate foundational data is no longer optional infrastructure. For those that haven't made that move yet, the gap between what their systems need and what ad hoc intake processes reliably deliver is only going to widen. 

How To Build Item Data Collection Into Core Retail Workflows 

Getting item data right is a retailer process problem. The retailers with the cleanest item masters didn't get there by hoping suppliers would submit accurate data. They built systems that define what's required, validate it early, maintain it over time, and assign someone accountable for the result. 

Define What Data You Actually Need and When 

Before suppliers can give you accurate data, you need to be precise about what you require and at what stage in the workflow you need it. Not every attribute matters equally, and not every attribute is needed at the same moment. 

A tiered requirements model makes this concrete: 

  • Before a purchase order can be issued: GTIN registered in the GS1 Global Registry, basic product hierarchy confirmed, unit of measure and pack configuration present 

  • Before a shipment is accepted: Dimensions, weights, and case pack quantities validated against published data; ASN data matching item master records 

  • Before a product goes live on the digital shelf: Complete content attributes (description, brand, images, ingredients, allergens) and correct Global Product Category (GPC) classification 
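A tiered model like this is straightforward to encode as stage-gated validation. The sketch below is a hypothetical illustration; the stage names and field names are assumptions for the example, not any specific platform's schema:

```python
# Hypothetical tiered requirements model: each workflow stage gates on its
# own set of mandatory attributes. Stage and field names are illustrative.
STAGE_REQUIREMENTS = {
    "purchase_order": ["gtin", "hierarchy_level", "unit_of_measure", "pack_configuration"],
    "receiving": ["dimensions", "weight", "case_pack_qty"],
    "digital_shelf": ["description", "brand", "images", "ingredients", "allergens", "gpc_code"],
}

def missing_attributes(item: dict, stage: str) -> list[str]:
    """Return the required attributes that are absent or empty for a stage."""
    return [field for field in STAGE_REQUIREMENTS[stage] if not item.get(field)]

def ready_for_stage(item: dict, stage: str) -> bool:
    """An item clears a gate only when every attribute for that stage is present."""
    return not missing_attributes(item, stage)
```

The payoff of separating the stages is that an item can be eligible for a purchase order long before it is eligible for the digital shelf, so neither gate blocks the other.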

It's also worth distinguishing between two operational stages that require different handling. New item publications (i.e., when a product enters the system for the first time) must be structured and submitted correctly before they can enter the setup portal for buyer approval. Initial loads for existing items require synchronization against current internal records to establish a clean baseline. Treating these as the same process creates gaps in both. 

Standardize Your Intake Process 

Ad hoc intake is where data quality degrades. When item data can arrive through a vendor portal, an EDI transaction, an email attachment, or a manually keyed spreadsheet, each channel introduces its own error rate and none of them are validating against the same rules. 

Standardized intake means: 

  • A defined set of required fields with format specifications, not just field names 

  • A single channel of record for initial item setup, even if multiple intake methods feed into it upstream 

  • Automated validation at the point of entry, not a manual review after the fact 

  • Clear, specific error messaging that tells suppliers exactly what's wrong and how to correct it 

For that last point, the Catalogue Item Confirmation (CIC) process is the standardized mechanism. When a supplier publishes item data, the CIC response communicates acceptance or rejection with specific status codes. A correctly built system uses CIC to trigger a "Reject" status with a clear reason, such as a case linked to another case level instead of a unit level. Suppliers know immediately what failed and what to fix, rather than waiting for a manual email from a data integrity team days later. 
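A minimal sketch of what that rejection messaging might look like, assuming a simplified publication record. The status names follow the CIC states mentioned in this article; the specific validation rules and field names are illustrative, not a GDSN data pool's actual message format:

```python
# Illustrative CIC-style response builder. Status values ("Received",
# "Rejected") follow the CIC states; the rules and fields are hypothetical.
def build_cic_response(publication: dict) -> dict:
    """Validate a publication and return a status with specific reasons."""
    errors = []
    # Example rule from the text: a case must link to a unit (each) level,
    # not to another case level.
    if publication.get("level") == "case" and publication.get("parent_level") == "case":
        errors.append("Hierarchy error: case is linked to another case level "
                      "instead of a unit level.")
    if not publication.get("gtin"):
        errors.append("Missing mandatory attribute: GTIN.")
    if errors:
        return {"status": "Rejected", "reasons": errors}
    return {"status": "Received", "reasons": []}
```

The point of the structure is the `reasons` list: the supplier gets the exact failure, machine-delivered, instead of a manual email days later.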

Build Validation Earlier in the Workflow 

Most retailers validate item data too late: after onboarding is complete, after the first shipment arrives, or after a downstream system flags an exception. By then, the bad data has already been acted on. 

Move validation upstream. Start with the GS1 Global Registry: Require that items be registered there before they can be synchronized to your systems. This establishes a baseline of legitimacy for the GTIN before any operational data is built on top of it. 

From there, hold items in a received status until all mandatory attributes are correctly published and validated. Don't allow items to enter the new item portal for buyer approval until the data is clean. 

Additional upstream checks worth building in:  

  • Dimension and weight verification before PO issuance for new items 

  • GTIN cross-checks against GS1 registry data 

  • Pack configuration validation before a supplier submits their first ASN  

Each of these catches errors at the point where they're cheapest to fix. 
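The GTIN cross-check can begin with arithmetic the retailer controls entirely: the GS1 mod-10 check digit, which is public and cheap to run at the point of entry. A minimal sketch:

```python
def gtin_check_digit_valid(gtin: str) -> bool:
    """Validate a GTIN-8/12/13/14 using the GS1 mod-10 check digit."""
    if not gtin.isdigit() or len(gtin) not in (8, 12, 13, 14):
        return False
    digits = [int(d) for d in gtin]
    body, check = digits[:-1], digits[-1]
    # Weights alternate 3, 1, 3, ... starting from the digit
    # nearest the check digit.
    total = sum(d * (3 if i % 2 == 0 else 1)
                for i, d in enumerate(reversed(body)))
    return (10 - total % 10) % 10 == check
```

This only proves the number is well-formed; confirming that the GTIN is actually registered still requires the GS1 Global Registry lookup described above.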

Create a Maintenance and Change Management Process 

Item data isn't static. Products change and every change that affects the physical product requires updated data across every system that consumes it. 

The operational rule that matters most here: All net content and case pack changes require a new GTIN. This is the mechanism that prevents inventory data from silently degrading when a product's physical characteristics change but its identifier doesn't. If a supplier ships a reformulated case with the old GTIN, the system continues counting units against the old volume definition, and on-hand accuracy erodes without a visible trigger. 
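This rule is mechanically checkable at intake. A hypothetical sketch, assuming a simplified item record (field names are illustrative): compare an incoming publication against the stored master and flag a GTIN that should have changed but didn't.

```python
# Hypothetical check for the "new GTIN on net content / case pack change"
# rule. Record layout and field names are assumed for the example.
def requires_new_gtin(current: dict, incoming: dict) -> bool:
    """True when a change to the physical product mandates a new GTIN."""
    return (current["net_content"] != incoming["net_content"]
            or current["case_pack_qty"] != incoming["case_pack_qty"])

def gtin_change_violation(current: dict, incoming: dict) -> bool:
    """True when a change that mandates a new GTIN arrives under the old one."""
    return requires_new_gtin(current, incoming) and current["gtin"] == incoming["gtin"]
```

Catching the violation here, before the record updates, is what keeps on-hand accuracy from eroding silently.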

A functional change management process covers: 

  • Advance notification of item changes before they take effect 

  • Required data re-publication tied to any packaging or formulation change 

  • Correct use of “Modify” or “Correction” operations for real-time maintenance, with a full republish as “New” when the system requires it, including a 25-hour clearing window after a deletion before republishing to ensure old records are fully removed 

  • Periodic audits of item master records against physical product to catch drift that change notifications missed 

  • Expiration and archiving logic for discontinued SKUs so the item master doesn't accumulate dead records 

Assign Internal Ownership 

Item data governance requires a named owner. Define which team owns item master integrity. Assign authority for approving item record changes. Establish an escalation path for disputed data.  

The roles that make this function in practice include a standards architect (who owns the data model and requirements), a data synchronization specialist (who manages the technical flow between systems and trading partners), and a master data or data integrity team responsible for ongoing quality and resolution. 

The organizational question — whether data synchronization is an IT function or a business function — has a clear answer: It's a business process that uses technology. It requires executive sponsorship to set standards and enforce them, and it requires operational ownership to maintain them day to day. Without both, even well-designed processes degrade over time as exceptions accumulate and no one has the authority or accountability to resolve them. 

When To Use Compliance and Chargeback Programs To Reinforce Data Quality 

Process and standards get you most of the way there. For some suppliers, they're enough. But for others, the operational cost of submitting bad data isn't visible enough, or urgent enough, to change behavior on its own. That's where compliance programs come in. 

The logic is straightforward. When a supplier submits incorrect dimensions, a missing GTIN, or an incomplete item hierarchy, the retailer absorbs the cost. Compliance programs shift a portion of that cost back to the source. When applied consistently and transparently, they create a financial incentive for suppliers to invest in data quality that process documentation alone doesn't provide. 

The goal is behavior change, not fee revenue. A compliance program that generates chargebacks without generating better data isn't working. 

What Item Data Compliance Typically Covers 

The failures that warrant a compliance response fall into a few clear categories: 

  • Logistical exceptions: Mismatched dimensions and weights are among the most common triggers — cases where what the supplier published doesn't match what physically arrived, requiring manual intervention to receive and slot the product correctly. 

  • Hierarchy and GTIN errors: Automated systems reject publications when the item hierarchy is built incorrectly or when the GTIN doesn't match the retailer's internal records. These are common enough to warrant explicit coverage in a compliance program. 

  • Missing mandatory attributes: Rejections also occur when required fields are absent (net content information, pack configuration, or other attributes the system needs to process the item at all). 

Communicate Requirements Before the First Order 

A compliance program only works if suppliers understand what's expected before they're charged for falling short. Requirements, fee schedules, and the specific triggers that generate a chargeback should be documented clearly and delivered during onboarding. 

Onboarding itself should include a structured data validation gate. Rather than allowing a new supplier to publish their full item catalog immediately, require a pilot publication of five to ten items first. Review those publications for hierarchy errors, missing attributes, and GTIN accuracy before clearing the supplier to proceed. This catches systemic data problems, the kind that would otherwise replicate across hundreds of SKUs, while the stakes are still low and corrections are manageable. 

Build a Dispute Resolution Pathway 

Suppliers need a way to contest chargebacks they believe are wrong. Without a clear dispute process, compliance programs generate friction and erode trading partner relationships without producing better data. A functional dispute path includes a defined window for filing a challenge, a named contact or team responsible for reviewing disputes, and a documented resolution standard so suppliers know what evidence is needed and how long a response takes. 

The "Challenge" process, in which a retailer must manually engage a vendor about a dimension or weight discrepancy, illustrates what happens without this structure: These processes are notoriously slow, ad hoc, and dependent on whoever happens to pick up the email. A formal dispute process makes resolution faster for both sides and creates an audit trail that protects the retailer if a chargeback is contested. 

A Practical Starting Point for Retailer Teams 

Improving item data quality across an entire supplier base is a long-term effort. These are the moves worth making first. 

  • Audit your current item master, including the hierarchy. Pull a sample of active SKUs and check dimensions, weights, GTINs, and pack configurations. More importantly, check the product hierarchy: how many eaches are in an inner pack, how many inner packs are in a case, how cases relate to pallets. Cascading failures often originate from a broken relationship between hierarchy levels. Those errors are easy to miss in a flat data export. 

  • Map which systems consume which attributes and what breaks first. For each data attribute, identify which downstream system depends on it and what the operational consequence is when it's wrong. This map tells you where to prioritize validation effort and which data gaps carry the most risk. It also makes the business case for investment concrete: Wrong case dimensions aren’t an abstract data quality problem when you can point to the WMS function it breaks. 

  • Assess where validation actually happens in your current workflow. Trace item data from the point a supplier submits it to the point it enters your systems. Identify every handoff, every manual step, and every place where bad data could pass through unchecked. Look specifically for where Catalogue Item Confirmation (CIC) messages could be integrated to give suppliers immediate feedback (Received, Review, Synchronized, or Rejected) rather than relying on a manual audit that surfaces errors days or weeks later. 

  • Evaluate GDSN connectivity as your synchronization layer. For high-volume suppliers and frequently changing items, GDSN connectivity is the most durable solution. Before you can begin, confirm that both sides have the identity infrastructure in place: Suppliers need GS1-registered GTINs, and your organization needs Global Location Numbers (GLNs) to subscribe to a supplier's data feed. Without these identifiers, synchronization can't start regardless of which data pool you connect through. 

  • Define internal ownership before enforcing external standards. Assign a named owner for item master integrity, define the roles responsible for data synchronization and standards architecture, and establish an escalation path before you ask suppliers to meet a higher bar. Trying to enforce data quality externally without governance internally produces inconsistent outcomes and erodes supplier trust. 

  • Start with a pilot before opening the gates. The first act of governance with any new supplier relationship should be a limited publication (five to ten items) reviewed for hierarchy accuracy, mandatory attributes, and GTIN validity before the supplier is cleared to publish their full catalog. This is where compliance expectations become real, and where structural data problems surface while they're still cheap to fix. 
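The hierarchy audit in the first step above can be sketched as a consistency check: quantities at each level of the hierarchy should multiply out. The record layout (each, inner pack, case, pallet) and field names below are assumptions for illustration:

```python
# Hypothetical hierarchy-consistency audit for an item master record.
# Field names are illustrative; the check is that quantities multiply out:
# eaches/inner x inners/case = eaches/case, and so on up the hierarchy.
def audit_hierarchy(item: dict) -> list[str]:
    """Return findings; an empty list means the hierarchy is consistent."""
    findings = []
    expected_case = item["eaches_per_inner"] * item["inners_per_case"]
    if item["eaches_per_case"] != expected_case:
        findings.append(f"eaches_per_case is {item['eaches_per_case']}; "
                        f"hierarchy implies {expected_case}")
    expected_pallet = item["eaches_per_case"] * item["cases_per_pallet"]
    if item["eaches_per_pallet"] != expected_pallet:
        findings.append(f"eaches_per_pallet is {item['eaches_per_pallet']}; "
                        f"hierarchy implies {expected_pallet}")
    return findings
```

A check like this is exactly what a flat data export hides: each field can look plausible on its own while the relationships between levels are broken.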

Your Suppliers Can’t Fix Data Problems They Can’t See 

When dimensions are wrong, GTINs are missing, or hierarchies are broken, the cost lands on your operation in receiving exceptions, fulfillment errors, and manual workarounds that shouldn't exist. SPS Commerce helps retailers build the visibility, workflows, and accountability structures that turn supplier execution into a competitive advantage. If item data quality is undermining your supply chain performance, explore how SPS Commerce approaches supplier performance.
