Introduction
It began with six words in a blank box. A minute later, I was looking at a moody, gallery-ready portrait: a face I never lit, a lens I didn’t own, a studio I had never booked. I could fine-tune the shadows, switch to a different ‘camera,’ and regenerate the image until one felt right, then claim the outcome as my own creative work. The thrill was undeniable, yet it was shadowed by a deep-seated unease. Generative systems, at their core, are instruments that reassemble patterns from vast datasets of human creation. For anyone engaged in learning or making today, they force three pressing questions: How do we assess the creativity of an AI-assisted artifact? Who can claim authorship? And what do we owe the countless creators whose work was absorbed into the training data without their consent?
My position is clear. We should judge creativity by evaluating the final artifact against the established standards of its field. Authorship should vest in the human who exercises meaningful control over the expression through deliberate selection, arrangement, and revision. And the use of training data demands a governance model built on transparency and fair compensation, so that innovation does not quietly exploit creative labour. These are not abstract philosophical points; they are practical principles I can apply immediately in both the studio and the classroom. I keep dated drafts and edit logs so that my expressive decisions are auditable.
Traditional Creativity and Originality
Long before the first text prompt was ever typed, creative fields had developed a method for evaluating work that did not depend on deciphering an author’s soul. This pragmatic approach is artifact-first and deeply rooted in tradition. The critical questions are: Is the piece novel and valuable within its specific lineage? Does it advance an idea, take a discernible risk, demonstrate technical skill that unifies the composition, and engage in a dialogue with its predecessors? Historically, editors and competition juries have rewarded these very qualities, not the purity of the tools used. Originality, in this context, is better understood as a threshold concept. It does not demand that a work be utterly unprecedented in every aspect; rather, it asks whether the work exhibits sufficient distinct expressive choice to count as a genuine contribution. A collage earns its originality from how its pieces are chosen and arranged; the new meaning emerges from those decisions. By the same token, a track built from samples can be original when composition and production fuse the fragments into a coherent whole rather than a stitched-together copy. Well before generative systems, the traditions of collage, photomontage, ready-mades, and sampling established that selection and arrangement can themselves constitute creative labour (see also a curator-friendly explainer from MoMA on collage and Tate on photomontage). Seen this way, the rise of models that generate convincing prose or images doesn’t invalidate the core criteria for originality; it complicates how we apply them and places greater responsibility on creators to document their own contributions. Major museums routinely frame novelty “within a lineage,” judging how a work extends conventions rather than whether every mark is unprecedented (for a readable introduction to this perspective, see Tate’s display note on Collage/Assemblage).
The Challenge of Generative AI

Consider a viral, AI-made image that fooled millions at a glance: the so-called “Balenciaga puffer jacket” Pope (TIME explainer / CBS recap), or the fake “Pentagon explosion” photo that briefly rattled markets before being debunked (Sky News). These episodes show how quickly perception can outpace verification. Yet generative AI unsettles our perceptions more than it overturns our standards. Its outputs can deceive, and its workflows blur the boundary between human judgment and automated patterning; one helpful response is to stay anchored in the artifact itself. In their study of machine-generated artworks, Mazzone & Elgammal (2019) argue that creativity can legitimately be ascribed when informed observers judge an artifact to possess novelty and value within a recognized tradition. On this view, a work can be deemed creative regardless of whether a human executed every mark; what matters is whether the final result fulfils the field’s criteria for novelty and value. The practical application is immediate and powerful. Studios and classrooms can mandate the disclosure of tool use and then apply the same rigorous criteria they would use for a darkroom photograph or a digital audio work: composition, coherence, risk-taking, and fit to genre. This balanced stance avoids two pitfalls: the romantic fallacy of treating the AI as the artist, and the panic-driven reflex that any AI involvement automatically disqualifies a work.

The legal system, however, draws a much firmer boundary around authorship, and it hinges unequivocally on human control. The U.S. Copyright Office’s 2023 guidance makes the rule plain: purely AI-generated material cannot be registered, whereas AI-assisted works may be protected if a human’s creative input, through selection, coordination, arrangement, or revision, rises to the level of authorship (official Federal Register notice). This principle was applied concretely in the “Zarya of the Dawn” decision: the Office upheld copyright in the text and in the selection and arrangement of the comic’s elements, but denied protection for the individual Midjourney-generated images, which were not under the author’s creative control (official letter PDF). The message is not a prohibition on using these tools, but a forceful insistence on candour and demonstrable judgment: creators must be able to show exactly where their own expressive decisions shaped the final work.
Legal scholarship provides a practical framework for implementing this boundary. Kahveci (2023) identifies the core issue as the “attribution problem”: generative outputs rarely trace back to a single, controlling originator in the way copyright law traditionally requires. Her analysis reinforces a practical principle: legal protection should follow meaningful human control, that is, contributions that go far beyond trivial prompting and instead constitute significant acts of selection, arrangement, and revision that shape the final expression. In the classroom, this principle sets a clear, actionable standard: maintain detailed process notes, save iterative versions, and be prepared to pinpoint the choices that only I, the human creator, could have made. When these choices are visible and documented, my claim to authorship is robust; when they are absent, it crumbles.
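To make that documentation habit concrete, here is a minimal sketch of the kind of edit log I have in mind: a small script that stamps each saved draft with a timestamp, a content hash, and a note naming the expressive decision behind it. The file name, record fields, and command-line usage are illustrative assumptions, not a prescribed format.

```python
import hashlib
import json
import sys
from datetime import datetime, timezone
from pathlib import Path

LOG = Path("process_log.jsonl")  # hypothetical log file: one JSON record per line

def log_draft(draft_path: str, note: str) -> None:
    """Record a dated, hash-stamped entry describing one expressive choice."""
    data = Path(draft_path).read_bytes()
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "file": draft_path,
        "sha256": hashlib.sha256(data).hexdigest(),  # fingerprints this exact draft
        "note": note,  # the human decision: what was selected, arranged, or revised
    }
    with LOG.open("a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")

if __name__ == "__main__":
    # e.g. python log_draft.py draft_v3.png "Cropped to 4:5 and re-lit the left side"
    log_draft(sys.argv[1], sys.argv[2])
```

A real workflow might commit drafts to version control or attach the log to a submission; the point is simply that the record of human judgment is dated, tamper-evident, and easy to audit.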
This clarified perspective also helps untangle our language. To state that an AI-assisted artifact can be creative is not equivalent to calling the AI an author. Creativity is a judgment we make about a work; authorship is a legal status meant to reward identifiable human contributions. Keeping these categories separate helps us avoid false dilemmas. We can recognize the real aesthetic force of some AI-assisted outputs while insisting that legal rights vest only in a human who exercised meaningful control. We can encourage studio experimentation and still argue that training on protected works must be transparent and compensatory. These commitments are not rivals; together they map the new terrain for creators, educators, and companies.
Examples

The controversy at the Colorado State Fair is a clear case study. In 2022, Jason M. Allen’s “Théâtre D’opéra Spatial,” created with Midjourney and then refined, won first prize in the fair’s digital arts category. The public backlash was swift and fierce (VICE roundup of outrage and reactions; NYT feature recap). For some, the outcome demonstrated that machine-assisted work could legitimately satisfy a jury trained to assess novelty and value within a defined category. For others, it felt like a rule violation, a form of cheating smuggled in under the cloak of technology. What the episode actually reveals is more straightforward and ultimately more useful: the evaluators simply continued doing what they have always done, judging the artifacts presented to them against the category’s criteria, while the public grappled with the ambiguous status of the new instrument. If a category’s rules permit digital composition and its jurors reward composition, coherence, and risk-taking, then any piece that excels in these areas can win, irrespective of the tool used. The rational response, therefore, is not to panic about “fake” art, but to demand clear disclosure and to make a category’s expectations explicit from the outset. The Smithsonian’s reporting captured both the jury’s rationale and the public’s confusion; viewed this way, the result appears less as an anomaly and more as a stress test of our long-held evaluative habits.

A different battleground emerges when we consider the training data that makes these models possible. Getty Images v. Stability AI in the UK is a bellwether for how legal systems will address the industrial-scale ingestion of copyrighted works. Getty alleges that millions of its licensed photographs were scraped and used to train Stable Diffusion without permission, and points to AI-generated outputs that retained a distorted Getty watermark as evidence of copying (Associated Press trial preview). The High Court’s January 2025 judgment set the procedural stage, with the subsequent trial focusing on jurisdiction, proof, and market effects (official EWHC 38 (Ch) judgment page). Read alongside Kahveci’s account of the attribution problem, the lesson is stark: even if an AI-assisted artifact can be judged creative, and even if a human can claim authorship through meaningful control, the industry still needs transparent data pathways and compensatory mechanisms for the human creators whose work trained the model. In practical terms, this points toward opt-out registries that models must respect, collective licensing for concentrated markets like stock imagery and music, and robust provenance tools capable of withstanding audit. These mechanisms are not impediments to creativity; they are the essential infrastructure that will allow powerful new instruments to coexist ethically with the creative labour that made them viable in the first place.
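What might an auditable opt-out check look like in practice? The sketch below is a simplified illustration, not any existing registry’s API: it assumes a hypothetical CSV registry of content hashes submitted by rights holders, filters a candidate training set against it, and writes a manifest recording exactly what was ingested.

```python
import csv
import hashlib
from pathlib import Path

# Hypothetical registry: a CSV with a "sha256" column listing content hashes
# that rights holders have opted out. Real registries and their formats are
# still being designed; this illustrates the workflow, not a standard.
OPT_OUT_FILE = Path("opt_out_registry.csv")
MANIFEST_FILE = Path("ingest_manifest.csv")

def load_opt_outs(path: Path) -> set[str]:
    """Read the registered hashes into a set for fast membership checks."""
    with path.open(newline="", encoding="utf-8") as f:
        return {row["sha256"] for row in csv.DictReader(f)}

def filter_training_set(candidates: list[Path], opt_outs: set[str]) -> list[Path]:
    """Drop opted-out files and write an auditable manifest of every decision."""
    kept = []
    with MANIFEST_FILE.open("w", newline="", encoding="utf-8") as f:
        writer = csv.writer(f)
        writer.writerow(["path", "sha256", "ingested"])
        for p in candidates:
            digest = hashlib.sha256(p.read_bytes()).hexdigest()
            allowed = digest not in opt_outs
            writer.writerow([str(p), digest, allowed])
            if allowed:
                kept.append(p)
    return kept

if __name__ == "__main__":
    registry = load_opt_outs(OPT_OUT_FILE)
    images = sorted(Path("training_images").glob("*.jpg"))  # assumed data folder
    kept = filter_training_set(images, registry)
    print(f"kept {len(kept)} of {len(images)} candidate files")
```

Exact hashing only catches verbatim copies; a deployable system would also need perceptual hashing, URL and domain matching, and signed manifests. Even this toy version, though, shows why “withstanding audit” is a record-keeping problem as much as a policy one.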
Conclusion
Generative systems are neither a magical cheat code for producing art without artists, nor an apocalyptic force that renders human effort obsolete. They are, fundamentally, instruments whose power is dictated by the human judgment that guides them. If we agree that creativity is judged at the level of the artifact, then a compelling AI-assisted piece deserves serious consideration when it meets a domain’s standards for novelty and value (Mazzone & Elgammal, 2019). If we define authorship as a legal status that rewards identifiable human contributions, then rights must vest where a person exercises meaningful control, through the selection, arrangement, and revision that shape expression, and those contributions must be disclosed with complete candour (U.S. Copyright Office, 2023a; U.S. Copyright Office, 2023b; Kahveci, 2023). And if we acknowledge that training data is the essential fuel for these systems, then transparent and compensatory governance is not a luxury. It is the non-negotiable price of a sustainable creative ecosystem, one that allows innovation to flourish without systematically strip-mining the very creative labour it depends upon (UK Courts & Tribunals, 2025; Associated Press, 2025).
These are principles I can act upon today. As a student, I can keep meticulous process notes, submit my work with full disclosure, and welcome evaluation based on the final artifact. As an aspiring professional, I can advocate for licensing models that pay for high-value datasets and demand provenance tools we can trust. The settlement I envision is straightforward: it must address money, visibility, and provenance. We must pay for what we use, properly reward the human work that shapes expression, and maintain records thorough enough to assess harm and negotiate fair terms. Achieving this does not require us to pretend that models can think; it demands that we remember, and insist, that people do.