Hi Allen,
For Business Central SaaS in high-volume manufacturing/distribution, the biggest performance gains usually come from workload isolation, telemetry-driven tuning, smarter AL/data-access patterns, and aggressive data-volume control, not from classic SQL DBA tactics. In SaaS you don't manage SQL partitioning directly, and database tuning options are more constrained than on-premises; Microsoft's supported SQL partitioning guidance is explicitly on-premises only. (learn.microsoft.com)
Short answer
If I had to prioritize, I’d do these first:
- Turn on Application Insights and establish baselines before changing anything. Microsoft explicitly recommends baselining and iterative tuning, and the Performance Toolkit (BCPT) is the right tool to simulate workloads and catch regressions in sandboxes. (learn.microsoft.com)
- Separate read-heavy workloads from write-heavy workloads by using Read Scale-Out for reports, queries, and API pages where possible. In Business Central Online, Read Scale-Out is already available and automatically enabled. (learn.microsoft.com)
- Tune AL/data-access patterns: use the right keys, SetCurrentKey, partial records/SetLoadFields, queries, FindSet, and careful FlowField/SIFT usage. (learn.microsoft.com)
- Reduce data footprint with retention policies, document cleanup, date compression, external attachment storage, and deleting unused companies. Microsoft explicitly ties smaller data volume to better runtime performance and lower storage pressure. (learn.microsoft.com)
- Use the new database index visibility/management tools if your tenant is on BC 2026 wave 1 or later. Admins can now view/manage indexes per company and review missing-index suggestions inside BC. (learn.microsoft.com)
1) Application-level techniques that matter most
A. Isolate reporting and API reads from posting workloads
In mixed workloads, the classic problem is long-running analytical reads interfering with short OLTP-style writes. Microsoft's guidance is clear: use Read Scale-Out to move read-only workloads (reports, queries, API pages) to a read-only replica so they don't contend with read/write business processes on the primary database. In Business Central Online this is already enabled; developers can reinforce it by setting read-only data access intent on report/query objects, and almost all OData GET requests already use read-only intent. (learn.microsoft.com)
Practical recommendation
- Put all custom analytical reports, KPIs, and export APIs on read-only intent where business rules allow.
- Keep operational posting, order release, picking, reservations, and cost updates on the primary/write path.
- Don’t let Power BI or reporting tools repeatedly hammer operational endpoints during peak hours; schedule imports or offload reporting datasets.
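As a minimal sketch of the read-only intent pattern (object ID and names are hypothetical, not from your tenant), a processing-only export report can declare its intent so the platform serves its reads from the read-only replica:

```al
// Hypothetical processing-only report; DataAccessIntent = ReadOnly asks
// the platform to route its reads to the read-only replica.
report 50100 "Inventory Snapshot Export"
{
    ProcessingOnly = true;
    DataAccessIntent = ReadOnly;

    dataset
    {
        dataitem(ItemLedgerEntry; "Item Ledger Entry")
        {
            trigger OnAfterGetRecord()
            begin
                // Export/aggregation logic goes here. No writes are
                // possible against the replica, so keep this strictly
                // read-only; attempting a write raises a runtime error.
            end;
        }
    }
}
```

The same property applies to queries and API pages; only set it where business rules tolerate slightly stale replica data.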
B. Use telemetry first, tuning second
For SaaS, the best starting point is Azure Application Insights. Microsoft's telemetry stack gives you:
- long-running SQL query events,
- database wait statistics,
- database lock timeout events,
- report telemetry,
- extension lifecycle signals,
- web-service load patterns.
Microsoft recommends:
- define “slow,”
- establish a baseline,
- locate the bottleneck,
- eliminate it,
- compare again to baseline. (learn.microsoft.com)
What I’d monitor first
- RT0005 long-running SQL queries,
- report execution time,
- database waits by category,
- lock timeouts,
- extension time on slow pages,
- web service request volume during peak posting windows.
C. Profile pages/reports before blaming “the database”
Microsoft’s current tooling is strong here:
- In-client Performance Profiler for slow user scenarios,
- Page Inspector to see which extensions are affecting a page,
- Scheduled performance profiler for transient collisions,
- AL Profiler for deep developer analysis,
- BCPT for regression/load simulation in sandboxes. (learn.microsoft.com)
In real SaaS environments, a lot of “database slowness” is actually:
- an extension adding expensive page triggers,
- poor AL loops,
- reports loading far more fields than needed,
- or too many web-service requests during peak time. Microsoft explicitly notes that aggressive web-service traffic can make the UI feel slow. (learn.microsoft.com)
2) AL/data-access tuning that usually gives the biggest win
A. Use the right keys and tell BC which key you want
Business Central performance still depends heavily on keys. Microsoft's guidance is straightforward: if code reads subsets of large tables, define and use keys that support the access path, and use SetCurrentKey to hint the correct sort/order path. If no matching key exists, the query still runs, just slower. (learn.microsoft.com)
For high-volume manufacturing/distribution, review keys on:
- Item Ledger Entry
- Value Entry
- Warehouse Entry
- Reservation Entry
- Sales Line / Purchase Line
- Prod. Order / Capacity / planning tables
- G/L Entry, Cust./Vendor Ledger Entry
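A sketch of the key-hinting pattern on one of those tables (assuming a base-app key on "Item No." and Open with "Remaining Quantity" as a SumIndexField, which current base app versions provide):

```al
// Sum remaining quantity for open entries of one item. SetCurrentKey
// hints the access path that matches the filters, and CalcSums can be
// served from the SIFT index instead of scanning rows.
procedure OpenRemainingQty(ItemNo: Code[20]): Decimal
var
    ItemLedgerEntry: Record "Item Ledger Entry";
begin
    // Hint the sort/access path that matches the filters below.
    ItemLedgerEntry.SetCurrentKey("Item No.", Open);
    ItemLedgerEntry.SetRange("Item No.", ItemNo);
    ItemLedgerEntry.SetRange(Open, true);
    ItemLedgerEntry.CalcSums("Remaining Quantity");
    exit(ItemLedgerEntry."Remaining Quantity");
end;
```

If telemetry shows the same filter pattern recurring without a supporting key, that is the signal to add one in AL rather than widen an existing hot key.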
B. Use partial records and SetLoadFields
Partial records are one of the most effective SaaS-safe optimizations for large tables. Microsoft says partial records are implicitly used on report data items, OData pages, list pages, listparts, and table-relation lookups, and developers should also use them in looping code. They reduce the amount of data loaded and can avoid joins to extension data when extension fields aren't needed. (learn.microsoft.com)
Practical use
- In loops over ledger/warehouse/line tables, load only the fields you actually need.
- Be careful not to trigger many JIT loads by later touching unloaded fields one by one. If a field will be needed, add it up front with AddLoadFields/SetLoadFields. (learn.microsoft.com)
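A minimal sketch of the looping pattern (procedure name is illustrative; where a SIFT-backed CalcSums exists, prefer that over any loop):

```al
// Loop over Sales Line loading only the column the loop actually reads.
procedure OutstandingQty(ItemNo: Code[20]) Total: Decimal
var
    SalesLine: Record "Sales Line";
begin
    // Only this column is fetched per row; touching any other field
    // later would trigger a JIT load per record, so list everything
    // the loop needs up front. Filter fields need not be loaded.
    SalesLine.SetLoadFields("Outstanding Quantity");
    SalesLine.SetRange(Type, SalesLine.Type::Item);
    SalesLine.SetRange("No.", ItemNo);
    if SalesLine.FindSet() then
        repeat
            Total += SalesLine."Outstanding Quantity";
        until SalesLine.Next() = 0;
end;
```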
C. Use queries for set-based reads
For reports or APIs over large datasets, Microsoft recommends AL queries and more set-based loading. In report tuning, they specifically suggest improving the SQL rows / SQL statements ratio and using queries as the data source where appropriate. (learn.microsoft.com)
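As a sketch (object ID and name are hypothetical), a query pushes the grouping and summing to SQL in one statement, instead of a nested AL loop issuing one statement per item:

```al
// Server-side grouping/summing: one set-based read for all items,
// served from the read-only replica where possible.
query 50101 "Qty by Item and Location"
{
    QueryType = Normal;
    DataAccessIntent = ReadOnly;

    elements
    {
        dataitem(ItemLedgerEntry; "Item Ledger Entry")
        {
            column(ItemNo; "Item No.") { }
            column(LocationCode; "Location Code") { }
            column(SumQuantity; Quantity)
            {
                Method = Sum;
            }
        }
    }
}
```

Consume it with `Open()`/`Read()` in AL, or expose it as a report data source or API query.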
D. Be careful with FlowFields / CalcFields / CalcSums
Each CalcFields/CalcSums call can require a separate SQL statement unless the same SIFT/filter context is reused from cache. Microsoft also notes that each request should stay within one SIFT index when possible. SetAutoCalcFields can help avoid repetitive per-record calls. (learn.microsoft.com)
Practical rule
- Avoid per-row CalcFields in large loops when a set-based alternative exists.
- Use CalcSums only when the supporting key/SIFT design actually matches the query pattern.
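A sketch of the SetAutoCalcFields pattern (the `ProcessItem` helper is a hypothetical placeholder):

```al
// Fold the FlowField calculation into the statements that read the
// rows, instead of one extra SQL statement per row via CalcFields.
procedure ListInventory()
var
    Item: Record Item;
begin
    Item.SetAutoCalcFields(Inventory);
    if Item.FindSet() then
        repeat
            // Item.Inventory is already calculated here; no per-row
            // CalcFields call is needed inside the loop.
            ProcessItem(Item."No.", Item.Inventory);
        until Item.Next() = 0;
end;

local procedure ProcessItem(ItemNo: Code[20]; Qty: Decimal)
begin
    // Placeholder for whatever the loop does with the values.
end;
```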
E. Keep SIFT/index usage selective
SIFT speeds sums, but too many SIFT indexes or unnecessary fields in SIFT structures hurt write performance. Microsoft explicitly warns that maintaining more keys/indexes improves read paths but slows inserts/updates/deletes.
For high transaction environments
- Keep only the SIFT/indexes you truly need for recurring operational totals or critical reports.
- Review extension-defined keys—especially on hot write tables.
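For illustration, a hypothetical custom table with one targeted SIFT index (rather than SIFT structures on every numeric field, which would tax every insert):

```al
// One SIFT index backing a recurring operational total:
// CalcSums(Quantity) filtered by item and posting date.
table 50102 "Shipment Stat. Entry"
{
    fields
    {
        field(1; "Entry No."; Integer) { }
        field(2; "Item No."; Code[20]) { }
        field(3; "Posting Date"; Date) { }
        field(4; Quantity; Decimal) { }
    }

    keys
    {
        key(PK; "Entry No.") { Clustered = true; }
        key(ItemDate; "Item No.", "Posting Date")
        {
            // Each extra SumIndexField or key adds write cost on a
            // hot table, so keep this list deliberately short.
            SumIndexFields = Quantity;
        }
    }
}
```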
F. Avoid extra SQL from bad AL patterns
Microsoft calls out several patterns that add unnecessary SQL:
- cloning records before modify/delete,
- inefficient loops,
- repeated calculations,
- and poor key usage. (learn.microsoft.com)
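A sketch of the first anti-pattern and its leaner replacement (field value is illustrative):

```al
// "Clone before modify" costs an extra SQL read per call.
procedure PutOnHold(var CustLedgerEntry: Record "Cust. Ledger Entry")
var
    CustLedgerEntry2: Record "Cust. Ledger Entry";
begin
    // Anti-pattern: Get() re-reads a row the caller already holds.
    // CustLedgerEntry2.Get(CustLedgerEntry."Entry No.");
    // CustLedgerEntry2."On Hold" := 'X';
    // CustLedgerEntry2.Modify();

    // Leaner: modify the record instance you already have.
    CustLedgerEntry."On Hold" := 'X';
    CustLedgerEntry.Modify();
end;
```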
3) Concurrency and peak-load strategies
A. Use tri-state locking behavior to your advantage
Since version 23, tri-state locking improves concurrency by making AL reads after writes more optimistic (ReadCommitted instead of old-style update locking), which reduces blocking and failed operations. But if custom code uses LockTable, you lose that optimistic benefit for that scenario. (learn.microsoft.com)
Implication
- Review older extensions that still rely heavily on LockTable.
- Challenge any customization that “solves consistency” by broad locking unless it is truly required.
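Where a specific read genuinely needs stronger guarantees, a narrower alternative to LockTable is the per-record ReadIsolation property (a sketch; the number-series scenario is illustrative):

```al
// Instead of LockTable(), which applies update locking to all
// subsequent reads of the table in this transaction, request a
// stronger isolation only on the one read that needs it.
procedure ReserveNextNumber(var NoSeriesLine: Record "No. Series Line")
begin
    // UpdLock scoped to this record variable; other reads in the
    // transaction keep tri-state locking's optimistic behavior.
    NoSeriesLine.ReadIsolation := IsolationLevel::UpdLock;
    NoSeriesLine.FindLast();
    // ...increment the last-used number and Modify()...
end;
```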
B. Stagger batch jobs and isolate heavy reports
Microsoft's performance guidance highlights that transient issues often happen when business processes interfere with each other, such as scheduled jobs overlapping. Use the scheduled performance profiler and telemetry to identify those collisions. (learn.microsoft.com)
Operational recommendations
- Move large reports to background schedules.
- Separate high-volume batch jobs from peak order/posting windows.
- Avoid running planning, warehouse recalculations, big exports, and finance posting corrections at the same time.
C. Use Database.LockTimeout sparingly for true batch scenarios
Microsoft documents Database.LockTimeout(Boolean) as a tool specifically for long-running processes that shouldn't be terminated because of lock timeouts, such as overnight batch jobs. That's useful, but it should be a targeted exception, not a blanket fix.
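A sketch of the targeted usage (codeunit name and ID are hypothetical):

```al
// Overnight batch: disable the lock timeout so the job waits for
// locks instead of being terminated mid-run.
codeunit 50103 "Nightly Cost Adjustment Run"
{
    trigger OnRun()
    begin
        // Scoped to this session only; interactive sessions keep the
        // default timeout, so users still fail fast on contention.
        Database.LockTimeout(false);
        RunBatchWork();
    end;

    local procedure RunBatchWork()
    begin
        // Long-running posting/adjustment logic goes here.
    end;
}
```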
4) What you can and cannot do at the “database level” in SaaS
A. Indexing: yes, but through BC/AL and admin tooling—not direct SQL
In SaaS you don’t tune Azure SQL directly. Instead:
- developers influence indexes through table keys in AL,
- and admins can now view/manage indexes per company in BC 2026 wave 1+, including disabling nonessential indexes to improve write performance and storage.
Microsoft also now exposes:
- Database Missing Indexes pages with suggested equality/inequality/include columns and estimated benefit,
- plus table-level visibility into record count, data size, and index size.
Recommended strategy
- Use Table Information to find the biggest tables and index-heavy hotspots.
- Review Database Missing Indexes suggestions.
- Add or adjust keys in AL where the same expensive pattern recurs.
- Consider disabling nonessential indexes if they only help rare queries but slow every write. (learn.microsoft.com)
B. Partitioning: not a SaaS tuning lever
Microsoft's supported article on table/index partitioning is explicitly for on-premises installations. So for Business Central Online, your partitioning strategy is effectively: don't rely on SQL partitioning; solve the problem with workload isolation, key design, retention, archiving, and compression instead. (learn.microsoft.com)
C. Compression: partly automatic online
Microsoft states that with Business Central Online, page-level data compression is automatically enabled on tables in a tenant unless a table's CompressionType is explicitly set to None. Reducing stored data volume also improves I/O-heavy workload performance. (learn.microsoft.com)
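For illustration, opting a hypothetical custom table explicitly into page compression (online this is the default anyway, so the property mainly matters if an extension has set None somewhere):

```al
// Explicit page compression on a high-volume custom log table.
table 50104 "Device Telemetry Buffer"
{
    CompressionType = Page;

    fields
    {
        field(1; "Entry No."; Integer) { }
        field(2; Payload; Blob) { }
    }

    keys
    {
        key(PK; "Entry No.") { Clustered = true; }
    }
}
```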
5) Data growth: archiving, retention, cleanup, and storage offload
A. Use retention policies for logs and archived tables
Retention policies let you automatically delete outdated data from eligible tables. When enabled, BC creates a job queue entry to apply them, by default daily at 02:00, and Microsoft recommends running them outside business hours. (learn.microsoft.com)
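Custom tables only become eligible once an extension registers them. A sketch of the install-time registration pattern from the system Retention Policy module (the codeunit ID and "My Integration Log" table are hypothetical):

```al
// Make a custom log table eligible for retention policies, so admins
// can attach a policy to it on the Retention Policies page.
codeunit 50105 "Reten. Pol. Install"
{
    Subtype = Install;

    trigger OnInstallAppPerCompany()
    var
        RetenPolAllowedTables: Codeunit "Reten. Pol. Allowed Tables";
    begin
        RetenPolAllowedTables.AddAllowedTable(Database::"My Integration Log");
    end;
}
```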
B. Use the Data Archive extension for exportable historical data
The Data Archive extension lets you archive data out of operational tables and export it to Excel/CSV; archives are chunked in 10,000-record sets. This is useful when the business still wants access to history, but not in the hot operational path. (learn.microsoft.com)
C. Use date compression on old closed entries
Microsoft says date compression can save space and, in online, even save money. It is specifically suited to older, closed data and can compress finance, warehouse, resource, item budget, and fixed-asset-related entries. It also warns that compression deletes original entries, so it should be planned carefully.
D. Delete unused companies
Microsoft explicitly recommends deleting companies you no longer need, such as old test/demo companies, because reducing database size improves runtime performance and helps avoid size limits online. (learn.microsoft.com)
E. Offload document attachments to external storage
A very practical newer SaaS feature: Business Central can store document attachments externally in Azure or SharePoint instead of the BC database. Microsoft says this can significantly reduce database size and help avoid storage limits, but you become responsible for backup and access control of the external storage. (learn.microsoft.com)
In large distribution/manufacturing tenants, this is often low-hanging fruit if users attach lots of PDFs, scans, quality docs, delivery notes, or images.
6) Recommended architecture for your scenario
Given:
- SaaS,
- high transaction volume,
- multiple companies/locations,
- APIs and reporting tools,
I’d recommend this operating model:
Tier 1 – Transaction core
Use BC for:
- order entry,
- inventory movements,
- warehouse execution,
- posting,
- cost and finance transactions.
Tier 2 – Read-heavy workloads
Move these to read-only intent / Read Scale-Out wherever possible: reports, analytical queries, API/OData reads, and Power BI dataset refreshes.
Tier 3 – Historical and auxiliary data
Reduce operational footprint by:
- retention policies,
- date compression,
- data archiving,
- deleting unused companies,
- offloading attachments externally. (learn.microsoft.com)
Tier 4 – Performance governance
Put in place:
- App Insights telemetry,
- BCPT baseline tests in sandbox,
- quarterly table/index review,
- release regression checks for custom extensions. (learn.microsoft.com)
7) A realistic priority plan
Phase 1 – Next 2 weeks
- Enable Application Insights and start collecting telemetry. (learn.microsoft.com)
- Use Table Information to identify your top 10 largest tables and largest index footprints. (learn.microsoft.com)
- Capture your top slow scenarios with the in-client profiler and long-running SQL telemetry. (learn.microsoft.com)
Phase 2 – Next 30 days
- Move custom reports/queries to read-only intent where possible. (learn.microsoft.com)
- Review extension code for:
  - missing SetCurrentKey hints,
  - missing SetLoadFields/partial records,
  - FlowField loops,
  - unneeded LockTable,
  - overused SIFT/indexes.
- Stagger job queues and move long reports to background runs. (learn.microsoft.com)
Phase 3 – Next 60–90 days
- Implement retention policies on log/archive tables. (learn.microsoft.com)
- Use date compression for old closed ledger/warehouse data where accounting rules allow.
- Offload document attachments to external storage if attachments are bloating the tenant. (learn.microsoft.com)
- If you’re on BC 2026 wave 1+, review Database Missing Indexes and Manage Indexes per company.
Direct answers to your questions
Q: What are the most effective performance optimization techniques at the application and database level?
For BC SaaS, the most effective levers are:
- telemetry + baseline testing,
- Read Scale-Out for reports/APIs,
- AL tuning (SetCurrentKey, partial records, set-based queries, controlled FlowField usage),
- reducing lock contention (including avoiding unnecessary LockTable),
- selective key/index tuning through AL and BC's admin pages,
- reducing database size with retention, archiving, compression, and attachment offload. (learn.microsoft.com)
Q: Are there recommended strategies for data archiving, indexing, and partitioning in Business Central SaaS?
- Archiving/retention: yes—use retention policies, Data Archive, document cleanup, date compression, and delete unused companies. (learn.microsoft.com)
- Indexing: yes—but via AL keys and BC admin tooling, not direct SQL tuning. Use Table Information, Manage Indexes, and Database Missing Indexes to guide decisions.
- Partitioning: no as a SaaS-admin tuning strategy; Microsoft’s supported partitioning guidance is on-prem only. (learn.microsoft.com)
If you want, I can turn this into a practical runbook with:
- the top telemetry/KQL checks to run first,
- a table-by-table review checklist for manufacturing/distribution,
- and a “quick wins vs longer-term fixes” plan for your BC SaaS tenant.