As artists in animation and games, we all spend time digging through a sprawling collection of models, textures, and materials scattered across local drives, cloud storage, and countless online marketplaces. Finding the right asset, keeping naming conventions and file types consistent, and tracking licenses is an enormous drain on time we would otherwise spend being creative.
That's precisely the problem my MFA Games thesis project, the Asset Management Framework, aims to solve. This project isn't just about dumping files into a folder; it's about building an intelligent, unified backend so you can spend more time being creative and less time wondering, "Shoot, where did I put that model?" This article walks through the architectural choices I made up front.
Problem Statement: Assets Everywhere
The primary goal: centralize all assets – both those created locally and those downloaded from third-party sources – into a single, searchable, organized system. This backend would then serve as the powerful engine behind user-friendly plugins for all our favorite DCCs (Blender, Maya, Houdini, etc.).
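To make "organized" a bit more concrete, here's a rough sketch of the kind of record I imagine the backend storing for each asset. The field names below are illustrative assumptions on my part, not the final schema:

```python
from dataclasses import dataclass, field
from pathlib import Path

@dataclass
class AssetRecord:
    """Hypothetical shape of one centralized asset entry."""
    name: str                  # normalized name, so naming conventions are enforced in one place
    file_path: Path            # where the file actually lives (local drive, synced cloud folder, ...)
    file_type: str             # e.g. "fbx", "blend", "png"
    source: str                # "local" or the marketplace it was downloaded from
    license: str               # license identifier, so usage rights stay searchable
    tags: list[str] = field(default_factory=list)  # free-form tags for search
```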
The Original Contenders
- Monolith
- What it is: This is basically just throwing all your code into one or two files.
- Why I didn't pick it: This becomes a nightmare for anything more complicated than a simple script.
- Event-Driven
- What it is: Different parts of the system don't interact directly; rather, they emit events, which other parts of the system listen for.
- Why I didn't pick it: EDA is great for massive, highly decoupled systems, especially if you're building something like Twitter or Netflix. It's amazing for scalability and resilience. However, it adds a lot of complexity right out of the gate.
- More moving parts: Requires a message broker, which is another piece of software to install, configure, and manage.
- Harder to debug: The flow of information isn't linear. If something goes wrong, tracing it through a chain of events can be very difficult.
- Overkill for a single dev: The overhead of setting up and managing an event-driven system would slow me down significantly, without offering a proportional benefit for the initial scope, especially considering the time constraints of the project (one academic year).
- Feature-Based/Domain-Driven:
- What it is: Instead of layers like "API" and "Services," you organize your code by feature, like "Assets Module," "Users Module," "Downloads Module." Each module contains its own API bits, logic, and database interactions.
- Why I considered it, but ultimately folded it into Layers: This one is fantastic for large teams or very distinct, isolated features. It helps keep all code related to a specific domain together.
- Why Layers won (for now): While beneficial, the core principles of separating API from business logic from data access still apply within each feature module. So, in a way, I'm still using a layered approach, but within specific domains that live in my services or external folders. For a solo dev or a small project, the strict top-level feature-based separation can sometimes feel a bit redundant or introduce too many small files if not carefully managed. The benefits of extreme isolation really shine when different teams are working on completely different features with minimal overlap. My current approach provides a clear structure that's easy to grasp and navigate for this size of project.
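To make that concrete, here's roughly what the tree ends up looking like: each domain gets its own module inside the layers rather than its own top-level folder. Only api/, services/, data_access/, and external come from the decisions above; the individual file names are illustrative guesses, not the project's actual layout:

```
api/              # request handling: the endpoints the DCC plugins will talk to
services/         # business rules, one module per domain
    assets.py
    downloads.py
external/         # clients for third-party marketplaces
data_access/      # everything that touches the database
```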
A Layered Approach
So, after weighing the pros and cons, I chose the traditional layered architecture because:
- Clarity & Understanding: It's intuitive. When I look at the code, I know exactly what each folder and file's job is. The API goes in api/, business rules in services/, and database bits in data_access/. This makes it easier for future me to understand what I was thinking, and for anyone else who might interact with the project to pick up the way it's structured very quickly.
- Maintainability & Debugging: Problems are generally easy to trace back to a layer.
- Testability: Each layer can be tested independently. I can write tests for my business logic in services/ without needing a live API or even a real database running. This makes testing faster, more reliable, and catches bugs earlier.
- Separation of Concerns: This is the big one. Each layer only knows about its immediate neighbors. The API layer doesn't care how the asset is saved, just that the AssetService will handle it. The AssetService doesn't care how the data is stored in the database, just that the data_access layer will handle it. This reduces tight coupling, so changes in one layer are less likely to break another. (The sketch after this list shows what these boundaries look like in code.)
- Scalability (Future-Proofing): While it starts as a single application, this layered structure makes it easier to scale horizontally if needed. For example, if my External Integration layer becomes a bottleneck, I could eventually pull that out into its own separate microservice, and the rest of the application wouldn't need to change much because of the clear boundaries. It doesn't force me into microservices upfront, but it doesn't prevent them either.
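To show what those boundaries look like in practice, here's a stripped-down sketch of the three layers talking to each other. The AssetService name and the api/ / services/ / data_access/ split come from above; everything else (class names, method names, the naming rule) is a hypothetical stand-in, not the project's actual code:

```python
# data_access/ layer: the only code that knows how assets are stored.
class AssetRepository:
    def __init__(self):
        self._rows = {}  # stand-in for a real database table

    def save(self, asset_id: str, data: dict) -> None:
        self._rows[asset_id] = data


# services/ layer: business rules; knows nothing about HTTP or SQL.
class AssetService:
    def __init__(self, repository: AssetRepository):
        self._repository = repository

    def create_asset(self, asset_id: str, name: str) -> dict:
        record = {"name": name.lower().replace(" ", "_")}  # e.g. enforce a naming convention
        self._repository.save(asset_id, record)
        return record


# api/ layer: only translates an incoming request into a service call.
def handle_create_asset_request(service: AssetService, payload: dict) -> dict:
    return service.create_asset(payload["id"], payload["name"])


service = AssetService(AssetRepository())
print(handle_create_asset_request(service, {"id": "a1", "name": "Rusty Barrel"}))  # {'name': 'rusty_barrel'}
```

Because AssetService only sees whatever repository it's handed, a test can pass in an in-memory fake instead of a real database, which is exactly the testability point above.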
In short, the layered approach provides a balance of simplicity, clarity, and flexibility. It lets me build a robust system efficiently without getting bogged down in unnecessary complexity, while leaving room to scale in the future.