Artificial intelligence has moved rapidly from experimental technology to strategic necessity. Organizations across industries are deploying AI at scale – subscribing to enterprise AI tools, building custom chatbots and analytics platforms, and embedding AI capabilities directly into products delivered to customers. These investments are substantial, and their accounting treatment is consequential.
The encouraging starting point is that finance leaders don’t need to wait for AI-specific accounting guidance. The principles governing software development costs under U.S. GAAP – primarily Accounting Standards Codification (ASC) 350-40, “Internal-Use Software”; ASC 985-20, “Costs of Software To Be Sold, Leased, or Marketed”; and ASC 730, “Research and Development” – provide the relevant frameworks. The challenge is applying these frameworks to AI, which shares the iterative, nonlinear character of much modern software development but introduces AI-specific complexity: Training involves massive datasets, specialized computing infrastructure, and technological uncertainty about whether a minimum viable model can be achieved. Proof-of-concept work and production development often blur together, making it difficult to identify stages of development. In addition, the performance targets themselves evolve as the model is refined.
Against the backdrop of modern, iterative software development practices, including but not limited to AI development, the Financial Accounting Standards Board (FASB) issued Accounting Standards Update (ASU) 2025-06, “Intangibles – Goodwill and Other – Internal-Use Software (Subtopic 350-40): Targeted Improvements to the Accounting for Internal-Use Software,” in September 2025. The ASU eliminates the rigid three-stage framework in Subtopic 350-40 that many entities found difficult to align to contemporary software development practices and replaces it with a principles-based model centered on when significant development uncertainty is resolved. Whether and how that model improves outcomes for AI projects specifically – where development uncertainty often is both substantial and persistent – will depend on how entities interpret and apply its core concepts. Finance leaders with significant AI spending on the horizon should understand the model now.
Before any discussion of capitalization, every AI initiative must work through some fundamental classification questions: Is the organization purchasing or building its own AI software? What is the software for? How will it be delivered? The answers determine which accounting model governs, and the differences in financial statement outcome can be dramatic.
In determining the accounting for AI, an organization first must consider whether it is purchasing AI capabilities or building them. Organizations subscribing to (purchasing) enterprise AI tools generally will find themselves in a hosting arrangement under ASC 350-40, where the subscription fee is expensed as incurred and implementation costs are separately evaluated for capitalization. This is the conceptually simpler path, but it still requires meaningful judgment to distinguish capitalizable configuration and integration work from expensed training and general setup activities.
For organizations building AI internally, the relevant accounting model depends on two factors: the purpose of the AI and its delivery model. AI that is internally developed specifically as a research and development (R&D) tool – such as drug discovery models in pharmaceutical companies or materials simulation in advanced manufacturing – likely falls under ASC 730, where costs generally are expensed as incurred. AI developed for operational purposes, whether internal or customer-facing, generally will follow one of two capitalization frameworks depending on whether customers ultimately receive a software license or merely access a hosted service.
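The classification logic described above can be sketched as a simple decision function. This is an illustrative simplification, not authoritative guidance – the function name, inputs, and output strings are assumptions made for the sketch, and real classifications turn on facts and circumstances:

```python
# Hypothetical sketch of the classification questions: purchase vs. build,
# purpose of the AI, and delivery model. Illustrative only.

def governing_standard(is_purchased: bool, is_rnd_tool: bool,
                       customers_take_license: bool) -> str:
    """Map the classification questions to the likely governing guidance."""
    if is_purchased:
        # Subscriptions to enterprise AI tools: hosting arrangement
        return "ASC 350-40 (hosting arrangement)"
    if is_rnd_tool:
        # AI built specifically as an R&D tool (e.g., drug discovery models)
        return "ASC 730 (expense as incurred)"
    if customers_take_license:
        # Customers take possession of the software via a license
        return "ASC 985-20 (software to be sold, leased, or marketed)"
    # Internal use, or customers merely access a hosted service
    return "ASC 350-40 (internal-use software)"
```

For example, a hosted customer-facing chatbot (built, operational, no license delivered) would route to the internal-use branch, which is the point the next paragraphs develop.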
This delivery model distinction is where the accounting tends to diverge most sharply and where the financial statement implications often are underestimated.
Many AI initiatives, including a range of applications that serve external customers, likely will be evaluated under ASC 350-40 as internal-use software. An important question that arises in this context: Does serving external customers change the classification from internal use to external use?
ASC 350-40-15-5 contemplates that internal-use software may be accessed by third parties, suggesting that the relevant inquiry is about the substance of the arrangement – specifically, whether customers receive a software license or merely access a hosted service. This determination can require careful analysis, particularly for arrangements that blend elements of both.
For entities that have not yet adopted ASU 2025-06, the stage-based framework remains operative: Costs in the preliminary project stage are expensed, costs in the application development stage are capitalized, and post-implementation costs are expensed. The challenge of mapping development activities to those stages was a principal driver of the FASB’s decision to replace the framework. The remainder of this discussion focuses on ASU 2025-06’s new model, which eliminates the stage-based approach in favor of a principles-based framework centered on when significant development uncertainty is resolved.
Under ASC 350-40 as amended by ASU 2025-06 – effective for all entities for annual periods beginning after Dec. 15, 2027, with early adoption permitted – capitalization begins when two conditions are met: Management has authorized and committed to funding the project, and it is probable that the project will be completed and the software will be used to perform its intended function. The FASB refers to this as the “probable-to-complete recognition threshold.”
Evaluating this threshold requires assessing whether significant development uncertainty exists. The ASU provides two factors relevant to this assessment. Development uncertainty is significant – and capitalization therefore cannot yet begin – when either of the following is present: (1) the software being developed has novel, unique, or unproven functions and features or technological innovations, or (2) the significant performance requirements of the software have not been identified or continue to be substantially revised.
For AI projects, these factors raise questions that often don’t have easy answers. When does an AI architecture cross the line into “novel, unique, or unproven” territory? At what point has a proof of concept genuinely resolved uncertainty about functions and features versus merely suggesting they might be achievable? When are performance requirements sufficiently stable that they are no longer “substantially revised”?
The FASB acknowledged that these assessments will involve significant judgment and will vary by facts and circumstances. Finance leaders should expect that applying these concepts to AI – which by its nature involves experimentation, iteration, and evolving performance targets – will require careful analysis and robust documentation. In many novel AI development efforts, significant development uncertainty might persist well into the development cycle.
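The threshold assessment described above can be expressed as a simple checklist. This is a hedged sketch, not the ASU's language – the parameter names are assumptions, and in the sketch "probable to complete" is proxied by the absence of significant development uncertainty, consistent with the two-factor discussion above:

```python
# Illustrative checklist for the probable-to-complete recognition threshold
# under ASU 2025-06. Field names are assumptions for this sketch.

def capitalization_may_begin(authorized_and_funded: bool,
                             has_novel_unproven_features: bool,
                             performance_reqs_still_revised: bool) -> bool:
    # Significant development uncertainty exists if either factor is present.
    significant_uncertainty = (has_novel_unproven_features
                               or performance_reqs_still_revised)
    # Both conditions must hold: management authorization/commitment, and
    # probable completion (proxied here by resolved development uncertainty).
    return authorized_and_funded and not significant_uncertainty
```

In practice each boolean input is itself a documented judgment, which is why the surrounding text stresses contemporaneous documentation.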
Notably, the FASB has indicated it expects that the new framework might result in more development costs being expensed under ASC 350-40, particularly for software-as-a-service (SaaS) providers and entities developing software with novel features. This is a meaningful consideration for entities that have relied on the prior stage-based model to support earlier capitalization.
For AI being developed as a licensed software product – where customers take possession of the code – ASC 985-20 governs. The capitalization threshold under ASC 985-20 is technological feasibility, which can be established through two paths: completion of a working model whose consistency with the product design has been confirmed through testing, or completion of a detail program design that confirms the product definition, identifies the remaining programming requirements, and demonstrates that all high-risk development issues have been resolved through coding and testing. In practice, the working model path is more common for AI development. Regardless of path, the consequence for AI projects is the same: Feasibility tends to be established relatively late in the development cycle.
A key question for AI product development is how to identify and assess “high-risk development issues” in an AI context. Model accuracy uncertainty, training data adequacy, scalability, and integration complexity are among the issues that qualify – and they often remain unresolved through significant portions of the development cycle. As a consequence, entities developing licensed AI products might find that technological feasibility is established relatively late in the process, with only a narrow window of post-feasibility costs qualifying for capitalization. The specific timing will depend on the facts and circumstances of each project and warrants careful analysis.
The potential gap in accounting outcomes between ASC 350-40 and ASC 985-20 illustrates why delivery model decisions deserve early accounting input. A hosted SaaS build and an on-premises licensed build of the same AI feature can face meaningfully different capitalization timelines. The FASB has indicated it expects ASU 2025-06 to narrow this gap in most cases, though differences might persist in AI development contexts where significant development uncertainty is more prolonged and judgment-intensive than in traditional software projects. Either way, finance leaders should be involved before go-to-market strategy is set.
Subscriptions to enterprise AI tools typically will be evaluated as hosting arrangements, where the vendor operates the infrastructure and delivers functionality over the internet. In these cases, the subscription fee generally is a service cost expensed over the contract period. The more nuanced accounting questions arise in implementation.
Connecting a third-party AI tool to an organization’s existing systems, data, and workflows can involve significant investment in configuration, application programming interface (API) integration, custom workflow development, and testing. These costs are not automatically treated the same as the subscription itself. Under ASC 350-40, the question is whether implementation costs meet the capitalization criteria – and an answer requires evaluating whether significant development uncertainty exists with respect to the implementation activities themselves.
For standard implementations where the vendor provides a working product and the entity is configuring and connecting rather than developing novel functionality, uncertainty might be resolved relatively early, and some implementation costs might be capitalizable. For implementations involving complex, novel integrations or meaningful uncertainty about whether the tool can function as needed within the entity’s environment, the analysis might be less straightforward. Key questions include: Where is the line between capitalizable configuration and customization and expensed training and administrative setup? Does the level of integration complexity create significant development uncertainty? Each of these questions requires judgment specific to the facts at hand and should be carefully evaluated, particularly when implementation costs are material.
Custom AI development – such as a fraud detection model, an internal analytics platform, a customer service chatbot built on a third-party large language model API (an API that provides access to an externally trained AI model), or a proprietary foundation model (a large-scale AI model trained from scratch on broad datasets that can serve as a base for multiple downstream applications) – raises the most judgment-intensive accounting questions and carries the greatest potential for capitalization variability across entities and over time.
The foundational question is when, if ever, the probable-to-complete threshold is met. Prior to that point, costs are expensed. This means proof-of-concept work, technology evaluation, architecture exploration, and early experimentation are period costs. Once significant development uncertainty is resolved and management commits to completion, eligible development costs may be capitalized. Capitalizable costs may include certain external direct costs of materials and services consumed in developing the software, direct internal labor for employees devoting time to the project, and applicable interest costs. General and administrative costs, overhead, and end-user training are excluded regardless of timing.
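The cost mechanics in the paragraph above – a threshold date before which everything is expensed, eligible direct-cost categories after it, and categories excluded regardless of timing – can be sketched as follows. The category labels and data shape are illustrative assumptions, not a prescribed chart of accounts:

```python
from datetime import date

# Illustrative cost categories; real policies would define these precisely.
CAPITALIZABLE = {"external_direct", "internal_labor", "interest"}
EXCLUDED = {"general_admin", "overhead", "end_user_training"}

def capitalizable_total(costs, threshold_met: date) -> float:
    """Sum eligible costs incurred on or after the probable-to-complete date.

    `costs` is a list of (date_incurred, category, amount) tuples.
    """
    total = 0.0
    for incurred, category, amount in costs:
        if category in EXCLUDED:
            continue  # excluded regardless of timing
        if category in CAPITALIZABLE and incurred >= threshold_met:
            total += amount  # pre-threshold costs fall through and are expensed
    return total
```

For instance, proof-of-concept labor dated before the threshold date contributes nothing to the capitalized balance, however large it is.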
Several AI-specific cost categories raise questions that existing guidance does not address with specificity. In those cases, finance leaders should expect to exercise and document significant judgment.
Training data. How should organizations account for the data used to train AI models? One question is whether training data has value beyond the specific AI project at hand – whether it might support future models, inform business analytics, or serve other purposes. When data has potential alternative future use, entities might need to consider whether the data should be evaluated for separate recognition as an intangible asset under ASC 350-30 rather than simply folded into the software development cost. The analysis will depend on the nature and characteristics of the data, how it was acquired and prepared, and the range of purposes it could reasonably serve.
When training data is acquired specifically for a single AI project with no broader use, the accounting becomes less straightforward, and significant judgment is required. Such data would not qualify for separate recognition as an intangible asset under ASC 350-30 given the absence of alternative future use. The more difficult question – and where diversity in practice is most likely to emerge – is whether data acquisition costs fall within the scope of ASC 350-40 and, if so, whether they constitute capitalizable direct costs. Some view such costs as outside ASC 350-40’s scope entirely because data does not represent a material or service consumed in developing the software; others take the position that data acquired to train an application during the development period may be capitalized as a direct cost. If the costs are not capitalizable under ASC 350-40 – whether because they fall outside its scope or because the development has an R&D characteristic bringing the costs within ASC 730-10 – the practical outcome is the same: expensing as incurred. The unresolved question is whether any path to capitalization exists at all.
Cloud computing. AI model training consumes significant cloud graphics processing unit (GPU) resources, and a key question is how to distinguish capitalizable development costs from ongoing operating expenses. The framework contemplates capitalization of costs directly attributable to development, which suggests that incremental computing capacity specifically provisioned for model training may be distinguishable from baseline cloud capacity used for general operations, but the methodology for making that attribution must be documented and applied consistently. Finance leaders should consider how their cost tracking infrastructure supports this distinction before significant training workloads begin.
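One possible attribution methodology – spend above a documented baseline is attributed to training – can be sketched as below. This is a deliberate simplification under the assumption that a stable baseline for general operations has been established and documented; it is not the only defensible approach:

```python
# Illustrative attribution of incremental cloud GPU spend to model training.
# Assumes a documented monthly baseline for general (non-training) operations.

def incremental_training_cost(monthly_bills, baseline_monthly: float) -> float:
    """Attribute only spend above the operational baseline to training.

    Months at or below the baseline contribute nothing; the attribution
    must be applied consistently period over period.
    """
    return sum(max(bill - baseline_monthly, 0.0) for bill in monthly_bills)
```

A more granular alternative is tagging workloads at provisioning time (per-project GPU clusters or billing accounts), which avoids baseline estimation entirely.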
LLM API fees. When an entity uses a third-party large language model (LLM) API to build and test an AI application, a question arises as to whether those API costs during development are capitalizable external direct costs or period operating expenses. The answer might depend on how clearly usage associated with developing the AI application can be distinguished from production usage – a distinction that is easier to support with separate accounts, credentials, or environments than with after-the-fact allocation. How entities establish and document this distinction is likely to vary in practice.
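The separate-credentials approach the paragraph describes lends itself to straightforward aggregation. In this sketch, each API key or account is tagged with an environment label at setup time – the tag names are illustrative assumptions:

```python
# Sketch of segregating LLM API spend by environment, assuming separate
# API keys/accounts for development and production ("dev"/"prod" are
# illustrative tags).

def api_cost_by_environment(usage_records):
    """Aggregate API fees by environment tag.

    `usage_records` is an iterable of (environment, amount) pairs, one per
    billed usage event or invoice line.
    """
    totals = {}
    for env, amount in usage_records:
        totals[env] = totals.get(env, 0.0) + amount
    return totals
```

The point of the structure is evidential: totals built from separate credentials are far easier to support than an after-the-fact allocation of a commingled bill.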
Once an AI system is deployed, an ongoing question is whether subsequent activities constitute capitalizable enhancements or expensed maintenance. Bug fixes, minor optimizations, and routine model retraining to prevent performance drift generally are considered maintenance. Activities that add new capabilities, expand use cases, or make material architectural changes are more likely to be enhancements subject to the capitalization framework. In practice, the distinction is not always obvious, and entities should expect this judgment to arise regularly throughout the life of an AI asset.
Determination of useful life also warrants careful thought. The rapid pace of AI advancement, the risk of model performance decay, and the potential for publicly available models to surpass proprietary ones raise questions about how long a custom AI asset will remain economically useful and whether the useful life assumptions made at deployment will remain appropriate over time. These are areas where regular reassessment and clear documentation of the factors considered will be important.
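When a reassessment shortens (or lengthens) the expected life, the arithmetic of a prospective revision – spreading the remaining carrying value over the revised remaining life – looks like the sketch below. This assumes simple straight-line amortization and is illustrative only:

```python
# Simplified straight-line amortization with a prospective useful-life
# revision (the remaining carrying value is amortized over the revised
# remaining life; prior periods are not restated).

def revised_monthly_amortization(cost: float, original_life_months: int,
                                 months_elapsed: int,
                                 revised_remaining_months: int) -> float:
    monthly = cost / original_life_months
    carrying_value = cost - monthly * months_elapsed
    # Spread what is left over the revised remaining life.
    return carrying_value / revised_remaining_months
```

For example, a $36,000 asset amortized over 36 months that is reassessed after 12 months to have only 12 months remaining jumps from $1,000 to $2,000 of monthly amortization – one reason useful-life assumptions for AI assets deserve regular revisiting.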
For software companies embedding AI capabilities into existing products, the accounting questions again begin with delivery model. Entities delivering AI-enhanced software as a hosted service generally will evaluate AI feature development as an enhancement to internal-use software under ASC 350-40 – raising questions about when the probable-to-complete threshold is met and which development costs qualify for capitalization. Entities delivering software via perpetual or term license will face the technological feasibility requirement under ASC 985-20 – raising questions about what constitutes a high-risk development issue in an AI context and when those issues are resolved sufficiently.
The same underlying questions about training data, computing costs, LLM API fees, and the maintenance-versus-enhancement distinction that arise in custom AI development are equally present when embedding AI into existing products, compounded by the need to distinguish AI feature development costs from the broader product maintenance and development cost pool. Finance leaders should consider how their project accounting structure supports that segregation.
ASU 2025-06 clarifies that capitalized software costs subject to ASC 350-40 must follow the disclosure requirements of ASC 360-10, “Property, Plant, and Equipment,” superseding prior practice of applying intangibles disclosures from ASC 350-30, “General Intangibles Other Than Goodwill.”
Beyond the baseline requirements, the question of what additional disclosures are appropriate – or expected by investors, auditors, and regulators – is one that entities should think through carefully. The judgments involved in AI cost accounting are inherently subjective and entity-specific. How did management assess significant development uncertainty? What factors informed the determination that the probable-to-complete threshold was met? What assumptions underlie useful life estimates, and how might rapid AI advancement affect the estimates? These are questions that sophisticated stakeholders are increasingly asking.
AI investment levels and their financial statement treatment are attracting growing scrutiny from investors and, potentially, regulators. Entities that have developed clear, well-documented approaches to these judgments will be better positioned to explain and defend their accounting – and to provide disclosures that give stakeholders genuine insight rather than boilerplate information. This is an area where practice is still forming and where early, thoughtful engagement with disclosure questions is critical.
Sound AI accounting requires more than technical knowledge of the applicable standards. It requires operational infrastructure that supports accurate cost capture, contemporaneous documentation of critical judgments, and accounting considerations embedded in project governance from the outset.
Perhaps the most important question to ask is whether the finance department is involved early enough. By the time a significant AI project is several months into development, questions about when significant development uncertainty was resolved – and therefore when capitalization should have begun – become simultaneously consequential and difficult to answer with contemporaneous evidence. Finance leaders who participate in AI project governance from inception are better positioned to ensure that accounting judgments are made and documented in real time rather than reconstructed under pressure at quarter close.
Accounting checkpoints tied to project milestones provide a practical structure. At project initiation, the applicable accounting model is considered and documented. At the potential transition into development activity, a determination is made and memorialized that documents the assessment of significant development uncertainty, the basis for any conclusion that uncertainty has been resolved, and evidence of management authorization. At deployment, the useful life determination is documented with the factors considered. For post-deployment enhancements, the maintenance-versus-enhancement judgment is documented contemporaneously.
Cost tracking infrastructure should be designed to support these determinations. The ability to segregate AI development costs by project and activity period, allocate direct labor costs to specific projects, attribute incremental cloud computing usage to development activities, and distinguish development-period API usage from production usage all require planning before significant costs are incurred.
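The tracking dimensions listed above can be captured in a single cost-record schema. The field names below are assumptions for illustration – the substance is that every cost carries the project, activity, environment, and timing attributes the later determinations will depend on:

```python
from dataclasses import dataclass
from datetime import date

# Illustrative cost-record schema; field names are assumptions, not a
# prescribed chart of accounts.
@dataclass
class AICostRecord:
    project_id: str    # which AI project the cost belongs to
    activity: str      # e.g., "development", "maintenance", "training_run"
    cost_type: str     # e.g., "labor", "cloud_gpu", "api_fee", "data"
    environment: str   # "dev" vs. "prod", for compute and API usage
    incurred: date     # supports the before/after-threshold cutoff
    amount: float
```

Designing these attributes into the ledger and time-tracking systems before spending begins is what makes the earlier threshold-date and attribution analyses supportable.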
Cross-functional collaboration between finance and data science is equally important and often underdeveloped. Finance leaders assessing significant development uncertainty need to understand what a novel function or feature looks like in a specific AI context, when a proof-of-concept result genuinely validates a technical approach versus merely suggesting it might work, and how iterative AI development processes relate to the concept of “substantially revised” performance requirements. These are not questions finance leaders can answer alone. Building the working relationship between finance and technical teams is one of the most valuable investments an organization can make in AI accounting infrastructure.
The accounting frameworks governing AI cost capitalization exist and provide structure. Applying the frameworks to AI requires engaging seriously with questions that don’t yet have universally settled answers.
Is the organization’s AI investment properly classified? The purpose of the AI, its delivery model, and the substance of related arrangements all bear on which accounting standard governs, and the financial statement consequences of getting the classification wrong can be material.
How is the organization evaluating and documenting significant development uncertainty? The FASB’s new framework places this assessment at the center of the capitalization decision, and the specific two-factor definition in the ASU provides more structure than a general notion of “uncertainty” might suggest. Entities should be asking how they translate those factors into a workable evaluation process for the types of AI projects they are undertaking.
Are AI-specific cost categories being analyzed with sufficient rigor? Training data, cloud computing usage, LLM API fees, and the ongoing maintenance-versus-enhancement distinction are areas where existing guidance does not provide complete answers and where entity-specific judgment – supported by clear policy and documentation – will be essential.
Is the accounting infrastructure in place before the spending begins? Cost tracking systems, time recording practices, cloud environment organization, stage gate processes, and cross-functional collaboration structures are far easier to build proactively than to retrofit.
Is the organization’s disclosure approach keeping pace with the sophistication of the judgments being made? As AI investments grow and the scrutiny around them intensifies, the quality of disclosure in this area matters – both for stakeholder transparency and for the entity’s ability to explain and defend its accounting.
These questions don’t have one-size-fits-all answers. But organizations that are asking them – and engaging technical accounting expertise to work through them – will be better positioned to account for AI investments accurately, consistently, and with the rigor that the complexity of these issues demands.
FASB materials reprinted with permission. Copyright 2026 by Financial Accounting Foundation, Norwalk, Connecticut. Copyright 1974-1980 by American Institute of Certified Public Accountants.