Edge Case Detection in Demand Planning: 18 Scenarios Your Forecast Engine Should Catch

A demand forecast is only as good as the edge cases it handles. Most planning tools give you a number and expect you to notice when it's wrong. Here are 18 real-world scenarios that silently corrupt inventory plans — and how a modern planning engine detects and surfaces them automatically.


Why edge cases matter more than model accuracy

Most conversations about demand planning focus on which statistical model to use: moving average, exponential smoothing, ARIMA, or machine learning. But in practice, the difference between a good forecast and a bad one rarely comes down to the model. It comes down to what happens when the data is messy.

A style that was out of stock for three weeks will show artificially low demand — and a naive model will project that suppressed velocity forward. A viral TikTok moment will show a one-week spike that looks like permanent growth. A style that only sells on markdown will look like a winner if you don't check the average unit retail (AUR).

These aren't theoretical problems. They happen in every single dataset, every single week. The question is whether your planning tool catches them or whether you find out when the inventory arrives.

The goal isn't a perfect forecast. The goal is to know which forecasts you can trust and which ones need human judgment — before you write the PO.

Part 1: Data quality edge cases

These are scenarios where the raw data would mislead any forecast model. The engine needs to detect and filter them before they reach the velocity calculation.

1. Trailing demand dropoff

When a style is winding down — sales declining from 50 units/week to single digits — those weak trailing weeks drag the velocity baseline down if included. A naive model averages them in and under-forecasts the style's true healthy demand.

How we catch it:

The engine walks backwards from the most recent week and identifies sustained trailing dropoffs. If the last N consecutive weeks (default: 3) fall below a percentage of the style's median weekly sales (default: 25%), those tail weeks are excluded from the velocity calculation. Healthy, flat, accelerating, and spiky demand patterns are never penalized — only styles that are genuinely trailing off have their tail trimmed. Optionally, weeks with poor size coverage or below-threshold margins can also be excluded via Settings → Demand Quality Filters.
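The tail-trimming rule can be sketched in a few lines of Python. The function name and signature are illustrative; the 3-week and 25% defaults are the ones described above:

```python
from statistics import median

def trim_trailing_dropoff(weekly_sales, n_weeks=3, pct_of_median=0.25):
    """Exclude a sustained trailing dropoff from the velocity baseline.

    If the last `n_weeks` consecutive weeks all fall below
    `pct_of_median` of the style's median weekly sales, those tail
    weeks are dropped. Healthy, flat, or spiky tails are untouched.
    """
    if len(weekly_sales) <= n_weeks:
        return weekly_sales
    threshold = median(weekly_sales) * pct_of_median
    tail = weekly_sales[-n_weeks:]
    if all(week < threshold for week in tail):
        return weekly_sales[:-n_weeks]
    return weekly_sales
```

A style winding down from ~50 units/week to single digits gets its tail trimmed; a steady style passes through unchanged.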

2. Viral demand spikes

A TikTok feature, a celebrity sighting, or a flash sale can produce a single week that's 5-10x normal. If that spike feeds into the forecast, every reorder recommendation is inflated for months.

How we catch it:

Any week exceeding a configurable threshold (default: 3x median) is flagged as a spike, stripped from the velocity engine, and marked with a visible badge. The planner sees it happened but the forecast stays clean.
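A minimal sketch of the spike filter, assuming a simple median-multiple rule as described (names are illustrative; the 3x default is from the text):

```python
from statistics import median

def strip_spikes(weekly_sales, multiplier=3.0):
    """Split weeks into (clean, spikes) around a median-multiple cutoff.

    Spike weeks feed the badge count shown to the planner; only the
    clean weeks reach the velocity engine.
    """
    cutoff = median(weekly_sales) * multiplier
    clean = [w for w in weekly_sales if w <= cutoff]
    spikes = [w for w in weekly_sales if w > cutoff]
    return clean, spikes
```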

3. Anomaly removal with transparency

Removing outliers is dangerous if it's invisible. A planner needs to know what the engine excluded and why — otherwise it's a black box.

How we catch it:

Every anomaly removal is counted and shown on the style card ("2 VIRAL SPIKE STRIPPED"). The detail view marks filtered weeks in the data grid so the planner can verify the engine's judgment.

Part 2: Inventory and fulfillment edge cases

These are scenarios where the forecast number might be right, but the plan breaks because of inventory constraints or operational realities.

4. Lost sales from inventory gaps

If the forecast says 200 units/week but you only have 50 in stock and no receipts coming, those 150 "sales" never happen. If they roll into your revenue forecast or OTB, your plan is fiction.

How we catch it:

Forecast demand is capped by available inventory (BOP + receipts + returns) each week. Lost sales are tracked separately, shown as a red row in the grid, and excluded from revenue rollups and OTB calculations.
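The inventory cap reduces to a one-week simulation step. This is a deliberate simplification with a hypothetical function name; the real bookkeeping runs week over week across the whole horizon:

```python
def simulate_week(bop, receipts, returns, forecast_demand):
    """Cap fulfilled demand at available inventory for one week.

    Returns (fulfilled, lost_sales, eop). Lost sales are tracked
    separately so they never leak into revenue or OTB rollups.
    """
    available = bop + receipts + returns
    fulfilled = min(forecast_demand, available)
    lost_sales = forecast_demand - fulfilled
    eop = available - fulfilled
    return fulfilled, lost_sales, eop
```

With the example above — a 200-unit forecast against 50 units on hand and no receipts — 150 units land in the lost-sales row, not the revenue plan.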

5. Safety stock as a timing buffer

Many tools treat safety stock as extra units to add to the order quantity. But safety stock should be a timing mechanism — triggering the reorder earlier so stock never dips below a floor, not inflating the quantity itself.

How we catch it:

The engine defines a safety floor (in units or weeks of seasonalized demand). Stockout is detected when EOP inventory hits that floor — not when it hits zero. The reorder fires earlier, but the order quantity is still based on actual demand over the replenishment horizon.
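To see why the floor fires earlier than a naive zero-stock check, here is a small projection sketch (illustrative names; assumes constant weekly demand):

```python
def weeks_until_stockout(bop, weekly_demand, safety_floor):
    """Count whole weeks until EOP would dip below the safety floor.

    With safety_floor=0 this degenerates to the naive 'hits zero'
    check; a positive floor pulls the reorder trigger earlier
    without inflating the order quantity.
    """
    if weekly_demand <= 0:
        raise ValueError("weekly_demand must be positive")
    inventory = bop
    weeks = 0
    while inventory - weekly_demand >= safety_floor:
        inventory -= weekly_demand
        weeks += 1
    return weeks
```

At 100 units on hand and 20 units/week, a 30-unit floor triggers at week 3 instead of week 5 — the order fires two weeks sooner, but its size is still driven by demand over the replenishment horizon.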

6. Size scale stockout bias

If size M has been out of stock for 4 weeks while size S had inventory, a simple sales-mix calculation will under-weight M and over-order S. This is how you end up with a warehouse full of XS and no mediums.

How we catch it:

Size scale percentages use In Stock Sales Rate (ISSR) — only weeks where a size had inventory are counted. A size that was out of stock doesn't get penalized; it gets its fair share based on the weeks it could actually sell.
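A sketch of the ISSR calculation under the stated rule that only in-stock weeks count (function and parameter names are illustrative):

```python
def issr_size_curve(sales_by_size, in_stock_weeks_by_size):
    """Size mix from In Stock Sales Rate.

    Each size's rate is units sold per week it actually had
    inventory, so a stocked-out size isn't penalized for weeks
    it couldn't sell.
    """
    rates = {size: sales_by_size[size] / in_stock_weeks_by_size[size]
             for size in sales_by_size}
    total = sum(rates.values())
    return {size: rate / total for size, rate in rates.items()}
```

If M sold 20 units in its 4 in-stock weeks while S sold 40 over 8, a raw sales mix says 67/33 in favor of S — ISSR correctly says 50/50.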

7. Size-aware PO allocation

Even with good size curves, splitting a PO order evenly by percentage ignores the current state. If you're already overstocked in XL and out of M, you need more M and less XL — not the category average.

How we catch it:

A per-size inventory waterfall simulates drawdown over the lead-time window, then calculates per-size order needs as target minus projected inventory. Overstocked sizes get fewer units; sizes running out get priority.
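The per-size waterfall reduces to: draw down each size over the lead-time window, then order target minus projection. This is a deliberately simplified sketch with a hypothetical signature (no receipts or returns in the drawdown):

```python
def size_order_need(bop, weekly_demand, lead_time_weeks, target_units):
    """Per-size order quantity from a lead-time drawdown simulation.

    Overstocked sizes project high and need little; sizes running
    out project to zero and get priority.
    """
    projected = bop
    for _ in range(lead_time_weeks):
        projected = max(0, projected - weekly_demand)
    return max(0, target_units - projected)
```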

Part 3: Pricing and margin edge cases

Price changes distort demand signals. These scenarios catch situations where velocity looks healthy but the underlying economics are broken.

8. Markdown-only demand & margin floor

A style that only sells when it's 30% off might show strong velocity and healthy WOC. But if you reorder at cost assuming those margins, the math doesn't work. The demand is real — the margin isn't.

How we catch it:

Two layers of protection. First, the Margin Floor setting (Settings → Tags) lets you set a minimum gross margin percentage. Styles below this threshold are tagged LOW MARGIN and automatically excluded from replenishment recommendations — they won't appear in the Reorder tab or receive PO suggestions. Second, the optional Min. Weekly Margin filter (Settings → Demand Quality Filters) excludes individual low-margin weeks from the velocity baseline, so clearance-driven sales don't inflate the demand forecast for styles that are still being replenished.
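The margin-floor gate itself is a simple filter. An illustrative sketch, assuming styles are represented as a mapping of name to gross margin percentage:

```python
def replenishment_candidates(styles, margin_floor=0.40):
    """Drop styles below the margin floor from reorder candidates.

    styles: {style_name: gross_margin_pct}. Anything under the
    floor would be tagged LOW MARGIN and excluded from PO
    suggestions.
    """
    return {name: gm for name, gm in styles.items() if gm >= margin_floor}
```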

9. Price elasticity with insufficient data

Computing style-level price elasticity requires meaningful price and volume variation. Without it, the regression returns noise — and a -4.0 elasticity coefficient will make your markdown simulator predict a 10x velocity lift from a 15% discount.

How we catch it:

The engine requires at least 5 weeks of sales history, a coefficient of variation >15% on units and >5% on AUR, and a result between -4.0 and -0.5. If any check fails, it falls back to sub-category or category-level elasticity rather than using unreliable style-level data.
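The guardrails translate directly into a boolean gate. A sketch using the thresholds from the text, with a small coefficient-of-variation helper:

```python
from statistics import mean, stdev

def cv(series):
    """Coefficient of variation: sample stdev relative to the mean."""
    return stdev(series) / mean(series)

def style_elasticity_ok(units, aur, elasticity):
    """True only if style-level elasticity is trustworthy.

    Requires >=5 weeks, enough variation in units (>15%) and
    AUR (>5%), and a coefficient in the plausible -4.0..-0.5
    band. Otherwise the caller falls back to category level.
    """
    return (len(units) >= 5
            and cv(units) > 0.15
            and cv(aur) > 0.05
            and -4.0 <= elasticity <= -0.5)
```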

Part 4: Assortment and competitive edge cases

These detect problems that span multiple styles — patterns you can't see by looking at one product in isolation.

10. Cannibalization detection

You launch a new hoodie and it sells 80 units/week. Great. But if your existing hoodie dropped from 60/week to 35/week at the same time, you didn't add 80 units of demand — you added 55 and shifted 25 from existing assortment.

How we catch it:

When a style is flagged as NEW (<28 days), the engine compares same-category styles' velocity in the launch window vs. the prior equal window. A >25% drop names the potentially cannibalized styles so you can assess the net impact.
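A minimal sketch of the window comparison (hypothetical function; assumes per-style velocities for the two windows are already computed):

```python
def flag_cannibalization(prior_velocity, launch_velocity, drop_threshold=0.25):
    """Name same-category styles whose velocity dropped past the threshold.

    prior_velocity / launch_velocity: {style: units_per_week} for the
    equal windows before and after the new style's launch. Returns
    {style: fractional_drop} for flagged styles.
    """
    flagged = {}
    for style, before in prior_velocity.items():
        after = launch_velocity.get(style, 0)
        if before > 0 and (before - after) / before > drop_threshold:
            flagged[style] = round((before - after) / before, 2)
    return flagged
```

The hoodie example above — 60/week falling to 35/week — is a 42% drop, well past the 25% trigger.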

11. Size curve breakage

A style's actual size distribution diverges from the category norm — size S is 40% of inventory when the category average is 18%. This is either intentional (a petite-focused style) or a broken size scale that will create deadstock in the wrong sizes.

How we catch it:

The engine compares each style's current inventory distribution to the category average. Any size deviating by more than 15 percentage points triggers a "Size curve deviation" insight with the specific numbers so the planner can decide if it's intentional.
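The deviation check is a straight percentage-point comparison against the category average (illustrative sketch; mixes are fractions that sum to 1):

```python
def size_curve_deviations(style_mix, category_mix, threshold_pp=15):
    """Sizes deviating from the category norm by more than
    `threshold_pp` percentage points, with the signed gap."""
    return {size: round((style_mix[size] - category_mix[size]) * 100, 1)
            for size in style_mix
            if abs(style_mix[size] - category_mix[size]) * 100 > threshold_pp}
```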

12. Category OTB breach

You have a $200K OTB budget, but Outerwear is eating 70% of it because unit costs are high. Meanwhile, Tops — your highest-velocity category — is starved for investment.

How we catch it:

The global OTB budget is split by category revenue share. If any category's committed reorder costs exceed its allocation, a warning appears on every style in that category. You see the imbalance before you place the next PO.
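The allocation-and-breach check can be sketched as follows (hypothetical names; assumes revenue shares sum to 1):

```python
def otb_breaches(total_otb, revenue_share, committed_cost):
    """Categories whose committed reorder cost exceeds their
    revenue-share slice of the global OTB budget."""
    return [category for category, share in revenue_share.items()
            if committed_cost[category] > total_otb * share]
```

With a $200K budget, a category holding 35% of revenue gets a $70K slice; $140K of committed Outerwear cost trips the warning on every style in that category.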

Part 5: Forecasting model edge cases

These detect situations where the forecast model itself is likely to be wrong — not because the model is bad, but because the data pattern doesn't fit what the model expects.

13. Demand trend classification

An EWMA forecast is designed for stationary demand — it lags behind trends. A style that's growing 5% per week will be systematically under-forecast, leading to chronic stockouts. A style in decline will be over-forecast, leading to excess.

How we catch it:

For declining styles, the trailing demand cutoff automatically detects when sales drop off — if the most recent N weeks fall below a percentage of the style's median sales, those tail weeks are excluded from the velocity baseline. This prevents the dying tail from dragging down the forecast. For accelerating styles, the configurable lookback window lets planners limit the velocity calculation to the most recent N weeks, naturally dropping the slow early period. Both thresholds are adjustable in Settings → Demand Quality Filters and Data Window.

14. Seasonal mismatch

The engine deseasonalizes demand using a category curve. But if the style's actual sales pattern doesn't match that curve — a "summer" item that actually peaks in spring — the base velocity calculation is corrupted.

How we catch it:

The engine correlates actual sales against the assigned seasonality curve. When correlation drops below 30%, it flags a "Seasonal mismatch" insight and suggests uploading a custom curve. The planner sees the mismatch before it propagates into 26 weeks of forecast.
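The fit check is a correlation between actual sales and the assigned curve. A dependency-free sketch with a hand-rolled Pearson correlation and the 30% threshold from the text:

```python
def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

def seasonal_mismatch(actual_sales, assigned_curve, min_corr=0.30):
    """True when the style's sales don't track the assigned curve."""
    return pearson(actual_sales, assigned_curve) < min_corr
```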

15. Erratic demand patterns

Some styles don't follow any pattern — they sell 200 one week, 30 the next, 180 the week after. No model will forecast this well. The risk is that the planner trusts the number without knowing it's unreliable.

How we catch it:

When the coefficient of variation exceeds 60%, the style is flagged as "Erratic demand" with a recommendation to widen safety stock or apply manual judgment. The forecast still runs, but the planner knows the confidence is low.
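The erratic-demand flag is a single coefficient-of-variation check (sketch; the 60% threshold is from the text):

```python
from statistics import mean, stdev

def is_erratic(weekly_sales, cv_threshold=0.60):
    """Flag demand whose relative volatility exceeds the threshold."""
    return stdev(weekly_sales) / mean(weekly_sales) > cv_threshold
```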

Part 6: Operational edge cases

16. New style with insufficient history

A style with 1-2 weeks of data doesn't have enough signal for a reliable style-level forecast. But planners still need to make buying decisions — especially when a new launch is selling faster than expected and needs immediate replenishment.

How we catch it:

When a style has fewer than 3 weeks of healthy sales data, the engine falls back to category or sub-category average velocity (matching the user's configured calculation level) to generate a forecast. If the style's own short-history velocity is higher than the category average — a hot launch — the engine uses the higher number so the reorder fires immediately. An "Insufficient history" insight appears on the product page showing the velocity source and flagging that recommendations should be reviewed as data arrives. Revenue outlook confidence is weighted at 60% for these styles.
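The fallback rule can be sketched as a small selection function (illustrative signature; the 3-week minimum is from the text):

```python
def new_style_velocity(own_velocity, own_weeks, category_velocity, min_weeks=3):
    """Velocity source for styles with short history.

    With enough history, the style's own velocity stands. Under the
    minimum, fall back to the category average — unless the short
    history is already hotter, in which case the higher number wins
    so a hot launch reorders immediately.
    """
    if own_weeks >= min_weeks:
        return own_velocity
    return max(own_velocity, category_velocity)
```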

17. Low inventory turns

A style might not be "old" by date, but if it's turning inventory slowly (low COGS relative to average inventory), capital is trapped. Traditional aging metrics require warehouse receipt dates that many brands don't have.

How we catch it:

Inventory turns (COGS / avg inventory at cost) and GMROI (GM$ / avg inventory at cost) are calculated from actual sales data — no receipt dates needed. Configurable period scaling (weekly through annual) lets planners match their reporting cadence.
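Both metrics are plain ratios over sales data. A sketch with period scaling expressed as an annualization factor (parameter names illustrative):

```python
def inventory_turns(period_cogs, avg_inventory_cost, periods_per_year=52):
    """Annualized inventory turns from one period's COGS —
    no warehouse receipt dates required."""
    return period_cogs * periods_per_year / avg_inventory_cost

def gmroi(gross_margin_dollars, avg_inventory_cost):
    """Gross margin return on inventory investment."""
    return gross_margin_dollars / avg_inventory_cost
```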

18. Returns inflating available inventory

Returns add units back to available inventory, which affects EOP, WOC, and reorder timing. Ignoring them overstates demand; double-counting them overstates supply.

How we catch it:

The engine supports per-style return rates from uploaded data, category/sub-category averages, or a manual global percentage. Returns are lagged by 2 weeks from the original sale and added to available inventory in the forecast — properly accounted for without distorting the demand signal.
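The 2-week lag is a simple shift of return units into later weeks. A sketch assuming a flat return rate (names illustrative):

```python
def lag_returns(weekly_sales, return_rate, lag_weeks=2):
    """Project returned units, lagged from the original sale week.

    Returned units re-enter available inventory `lag_weeks` after
    the sale, so supply isn't double-counted in the sale week.
    """
    horizon = len(weekly_sales) + lag_weeks
    returns = [0.0] * horizon
    for week, sold in enumerate(weekly_sales):
        returns[week + lag_weeks] += sold * return_rate
    return returns
```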


The compounding effect

Any one of these edge cases, in isolation, might shift a forecast by 10-20%. But they compound. A style that was out of stock (suppressed velocity), had a viral spike (inflated velocity), is only selling on markdown (misleading demand signal), and has a broken size curve (wrong allocation) can produce a reorder recommendation that's 3x too high or too low.

The real cost isn't the software — it's the inventory that sits in a warehouse for 9 months because the forecast missed three edge cases that a human couldn't reasonably catch across 200 styles.

Rule of thumb: If your planning tool doesn't tell you why it's recommending a number, it's asking you to trust a black box. Every recommendation should come with the edge cases it detected, the data it excluded, and the assumptions it made.

How Reactive handles this

Every scenario described above is detected automatically in Reactive SDP's demand engine. When you open a style's detail page, you see the forecast and the diagnostics: which weeks were excluded, what trend the engine detected, whether the seasonality curve fits, and whether the style is cannibalizing a neighbor.

These aren't buried in a settings panel or a separate report. They appear as inline callouts directly below the style's key metrics — ATS, WOC, Base Velocity — so the context is right where you need it when you're making the buy decision.

The goal is simple: don't just give you a number. Give you the reasons to trust it or question it.

See it in action

Upload your sell-through data and see which edge cases Reactive catches in your assortment. Free for 30 days.

Start planning →