Rethinking Object-Oriented Programming as 'Terms and Definitions' — What Lies Before OOP Principles
⚠️ Disclaimer: The writing of this article was assisted by generative AI, based on aka's ideas.
Hey, it's aka.
In a previous article, I wrote, "I plan to write separate articles about OOP, TDD, and DDD." So this time, I'm writing about OOP and DDD. However, this is not a textbook explanation of OOP — it's a personal interpretation.
In this article, I'll rethink OOP through the lens of "object = term". I'd be happy if you come away thinking, "Oh, that's an interesting way to look at OOP."
Programming Is the Act of Writing Terms and Their Definitions
Before diving into OOP, let me share one premise.
Programming is the act of writing terms and their definitions.
Since it's a programming language, this might seem obvious. Yet in practice, most people think of programming as "writing processes" rather than "writing terms and their definitions." Of course, there are aspects like computation, control flow, state transitions, and I/O. But when we turn those into code, we create terms in the form of function names and class names, and write their definitions.
In a program, the terms you establish behave exactly as their definitions say. The terms you chose work with the meanings you gave them. Let's look at a concrete example.
```java
// In an order management system for an e-commerce site
class Order {
    private final List<OrderLine> lines;
    private final Money totalAmount;

    public void confirm() { ... }
    public void cancel() { ... }
}
```
Here, Order is the term. And having line items, having a total amount, being confirmable, and being cancellable — that's the definition of Order. The term decides "what to call it," and the definition decides "what it is."
In code, terms and definitions appear in several forms:
- Classes/types introduce a term and give its definition through the entirety of its properties and methods.
- Interfaces are a mechanism that forces different terms to write a common definition. For example, "a payment method is something that has pay()."
- Methods turn actions into terms. For the term Order.confirm, the processing inside is its definition.
The relationship between terms and definitions changes depending on the granularity you're looking at. Let's revisit the Order example:
Class layer:
- Term = Order
- Definition = lines, totalAmount, confirm(), cancel()

Method layer:
- Term = Order.confirm
- Definition = change status to confirmed, record confirmation time, ...
At the class layer, Order is the term, and the entirety of its methods and properties is its definition. Step down to the method layer, and Order.confirm becomes the term, with the processing inside becoming its definition. A "part of the definition" at the upper layer becomes a "term" at the lower layer. At any granularity, there's a pairing of term and definition.
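To make the two layers concrete, here is a minimal sketch (the enum and field names are hypothetical simplifications, not the article's full model): at the class layer the term is Order; at the method layer the term is Order.confirm, and its body is the definition.

```java
import java.time.LocalDateTime;

// Hypothetical minimal Order, showing the class layer vs. the method layer.
class Order {
    enum Status { DRAFT, CONFIRMED }

    private Status status = Status.DRAFT;
    private LocalDateTime confirmedAt;

    // Term: Order.confirm. Definition: change status to confirmed
    // and record the confirmation time.
    public void confirm() {
        this.status = Status.CONFIRMED;
        this.confirmedAt = LocalDateTime.now();
    }

    public boolean isConfirmed() {
        return status == Status.CONFIRMED;
    }
}
```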
So why should we care about this pairing? Because it's not just a coding style — it's the foundation of software quality. Appropriate terms and their definitions have at least four powers:
- Cognitive compression. When the term order.confirm() exists, you can treat it as "confirming an order" without being aware of the details of its definition each time. Complexity is contained within the definition, and you can communicate using just the term.
- Foundation for reasoning. Well-chosen terms make it easier to reason about the next design decision. The thought "If Order has confirm(), shouldn't it also have cancel()?" comes naturally.
- Shared foundation. When terms exist, the team can more easily refer to the same concept with the same word. Instead of "that process" or "that thing," you can have conversations using clear names.
- Change resilience. When terms and definitions correspond properly, the impact of changes becomes more predictable. "I want to change the order confirmation logic" leads you to look at the definition of Order.confirm() — it serves as a guide.
The impact on maintainability is particularly significant.
Terms need to be carved at appropriate granularity, and definitions should match that granularity. Terms that are too coarse lose their function as terms. The Order above was a definition focused on a single phase — "an order that can be confirmed or cancelled." But what if we crammed responsibilities for every phase into this single Order's definition? The cart phase where items are added, the confirmed phase after payment clears, the phases during and after delivery — what happens when we press all of these into a single Order?
```java
order.addItem(item);      // Should only be callable in the cart phase
order.confirm();          // Only meaningful before confirmation
order.requestRefund();    // Only meaningful after fulfillment
```
You can't tell from the code alone which method should be called when. A new developer would have to trace internal status flags before understanding, "Ah, so that's what this Order means." If the terms had been separated from the start — Cart, ConfirmedOrder, CompletedOrder — the term alone would convey what state it's in and what can be done with it. This is why the correspondence between terms and definitions directly impacts maintainability.
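A rough sketch of that phase separation, under simplified assumptions (items as plain strings, where a real model would use OrderLine):

```java
import java.util.ArrayList;
import java.util.List;

// Each phase is its own term: what you can do is visible in the type.
class Cart {
    private final List<String> items = new ArrayList<>();

    // addItem exists only in the cart phase.
    public void addItem(String item) {
        items.add(item);
    }

    // Confirming produces a new term, not a hidden status flag.
    public ConfirmedOrder confirm() {
        return new ConfirmedOrder(List.copyOf(items));
    }
}

class ConfirmedOrder {
    private final List<String> items;

    ConfirmedOrder(List<String> items) {
        this.items = items;
    }

    public int itemCount() {
        return items.size();
    }
    // No addItem() here: the term itself rules it out at compile time.
}
```

The compiler now enforces what previously had to be traced through internal status flags.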
One note here: This premise is not specific to OOP. Whether functional or procedural, programmers create terms through function names and type names, and write their definitions. However, different paradigms lead to different ways of carving terms and different forms of definitions.
For example, in functional programming, terms and their definitions are expressed through algebraic data types:
```haskell
-- "OrderStatus" means one of: InCart, Confirmed, or Completed
data OrderStatus = InCart | Confirmed | Completed

-- "Order" is order lines plus a status
data Order = Order { orderLines :: [OrderLine], status :: OrderStatus }

-- "confirm" transforms an order into a confirmed order
confirm :: Order -> Order
confirm order = order { status = Confirmed }
```
In data OrderStatus = InCart | Confirmed | Completed, OrderStatus is the term, and "one of: in cart, confirmed, or completed" is its definition. In OOP, classes introduce terms and give definitions through properties and methods. In FP, types introduce terms and give definitions through functions over those types. The same concepts are carved out with different tools.
Let's compare with the OOP-side code:
```java
// OOP-style: the order object owns state and changes itself
class Order {
    private OrderStatus status;

    public void confirm() {
        this.status = OrderStatus.CONFIRMED;
    }
}
```
In OOP terms, the definition of Order.confirm is "the order changes its own state." In FP terms, the definition of confirm is "a transformation that produces a new confirmed order." The forms of the definitions differ. But both are establishing the same terms — Order and confirm — and giving each a definition.
The Roles of OOP and DDD
If programming is the act of writing terms and their definitions, the next question becomes: What terms should we establish, and what should their definitions contain?
When establishing the term "order," what should its definition include? At what granularity should it be carved? There's too much freedom — "write terms and definitions" alone doesn't give you answers. That's where OOP and DDD come in as tools.
- OOP provides an expressive form that makes it easy to write human perceptions as terms and definitions. The perception "an order is something that can be confirmed" can be written as the term Order with confirm() as part of its definition.
- DDD determines the direction — whose terms are these? Are they for the database? For the framework? For the domain (business area)? DDD distinguishes these and centers the domain's terms.
Layer 1: Programming is writing terms and their definitions (paradigm-independent principle)
↓
"So what terms should we establish, and how should we define them?"
↓
Layer 2: OOP and DDD serve as tools
- OOP: An expressive form for writing perceptions as terms and definitions
- DDD: A way of thinking that directs terms toward the domain
From here, I'll dig deeper into Layer 2 — how OOP and DDD help with terms and definitions.
Why OOP Is Suited for Writing Terms and Definitions
OOP (Object-Oriented Programming) is generally described as "a paradigm that bundles data and related operations into objects, composing programs through interactions between objects." Why is this object mechanism suited for writing terms and definitions? There are three main reasons:
- Objects make it easy to turn perceptions directly into terms
- Mechanisms like encapsulation and abstraction help maintain boundaries between terms and definitions
- Subject-verb relationships can be naturally fixed in code
Object = Perception Turned into a Term
In the general definition, an object is "something that bundles data and operations." But to me, an object is a human perception fixed as a term in code.
What's important here is that it's not "copying the real world as-is." What OOP captures is the result of how the real world was carved out. When humans look at reality and perceive "there's a boundary here" or "this is a coherent unit," that carving is what gets translated into code. If the system's purpose changes, the carving changes even for the same real world. I use the word "perception" here because this carving is not objectively unique — what constitutes a unit and where boundaries are drawn depends on the observer's position and purpose.
For example, suppose you're building a reservation system for a pet salon:
```java
// "Dog" in a pet salon reservation system
class Dog {
    private final Name name;
    private final Owner owner;
    private final Breed breed;
}
```
This Dog is not a complete biological model of a dog. "What is a dog to a pet salon reservation system?" — it has a name, has an owner, and has a breed. There's no organ structure, genetic information, or bark(). Those aren't concerns of this system.
In other words, an object as a term is not a copy of something that exists, but a unit of perception carved out for a purpose. And the contents of the class become the definition of that perception.
OOP Principles Are Means, Not Essence
With this perspective, the positioning of OOP principles looks a bit different. Let's use a racing game as an example:
```java
// "Car" in a racing game
interface Car {
    void accelerate();
    void brake();
    void steer(Direction direction);
}

// "Player": someone who drives a car in the race
class Player {
    private final PlayerId id;
    private Car car;

    public void drive(Direction direction) {
        car.accelerate();
        car.steer(direction);
    }
}
```
Car doesn't include vehicle inspection dates or insurance contract details. In a racing game, if it can accelerate, brake, and steer, it's a "car." And Player doesn't know the internals of Car's definition. All it needs is the surface of the term — "a car can accelerate, brake, and steer."
This is similar to the relationship between humans and cars. Humans can drive a car without knowing how the engine works. All that's needed is knowing how to interact with it. When you turn perceptions directly into terms, that relationship naturally appears in code.
The fact that Player can drive without knowing Car's internals is close to encapsulation. Beyond encapsulation, OOP principles — abstraction, polymorphism, inheritance — are all, in my interpretation, means, not essence. The essence is giving perceptions a term and writing its definition. These principles are tools for handling terms and definitions well. If you mistake them for the essence, it's easy for satisfying a principle to become the goal itself. If you see them as means, "Are the right terms established, and are their definitions neither too much nor too little?" becomes your criterion.
If the criterion lies in terms and definitions, the commonly used word "responsibility" in OOP contexts can also be reframed through this lens. "What is this class's responsibility?" means "what should be included as this term's definition?" The Single Responsibility Principle (SRP) saying "a class should have only one reason to change" can be rephrased as: a term and its definition should be cohesive around a single perception.
Let's look at another example:
```java
// Common definition of a payment method: something that can pay
interface PaymentMethod {
    PaymentResult pay(Money amount);
}

class CreditCard implements PaymentMethod {
    public PaymentResult pay(Money amount) { ... }
}

class BankTransfer implements PaymentMethod {
    public PaymentResult pay(Money amount) { ... }
}
```
The PaymentMethod interface forces different terms — CreditCard and BankTransfer — to write the common definition pay(). That's why both can be treated as the same "payment method." Here too, OOP principles work as means for properly organizing terms and definitions.
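A sketch of the caller's side, with PaymentResult and Money simplified to primitives for the example (an assumption, not the article's types): the caller speaks only the term PaymentMethod and never names a concrete payment type.

```java
// Simplified: pay() returns success/failure instead of a PaymentResult.
interface PaymentMethod {
    boolean pay(long amountInCents);
}

class CreditCard implements PaymentMethod {
    public boolean pay(long amountInCents) {
        return amountInCents > 0; // stand-in for a real charge
    }
}

class BankTransfer implements PaymentMethod {
    public boolean pay(long amountInCents) {
        return amountInCents > 0; // stand-in for a real transfer
    }
}

class Checkout {
    // Depends only on the common definition "something that can pay".
    public boolean settle(PaymentMethod method, long amountInCents) {
        return method.pay(amountInCents);
    }
}
```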
OOP's Syntax Naturally Expresses the Correspondence Between Terms and Definitions
There's another reason OOP is suited for writing terms and definitions: its syntax.
```java
// OOP-style: the term (object) acts by itself
order.confirm(paymentInfo);

// Procedural-style: the term (data) and the operation are separated
confirm(order, paymentInfo);
```
order.confirm() directly writes the perception "an order is something that can be confirmed." The subject-verb relationship is reflected in the code.
That said, confirm(order) isn't bad. It's just a different way of carving. However, in OOP, behavior belongs to the type. That means "confirm" becomes part of Order's definition, so the term's definition includes where responsibility lies. In FP, behavior exists outside the type, so the term and responsibility attribution are separated. When you want to fix a human perception, responsibility and all, to a subject, OOP's syntax feels natural — that's my sense of it.
What Happens When Terms Are Carved Poorly
OOP has the power to turn perceptions into terms. So what happens when terms are carved not by purpose-driven perception but solely by technical convenience?
The Problem with Setters as a Carving Approach
Setter culture is the most visible example of this misalignment.
```java
order.confirm();
```
In this form, order reads as a term meaning "an order that can be confirmed." Subject and verb are connected, and you can see not just what's happening but what it's being treated as.
Now, what happens in a world where the term confirm() doesn't exist? For example, code like this appears inside a service class:
```java
// A process buried somewhere in OrderService.java
order.setStatus(CONFIRMED);
order.setConfirmedAt(now);
payment.setAuthorized(true);
```
What's here is certainly processing. But it's no longer the cognitive unit of "confirming an order" — it's been decomposed into granular technical operations: changing a status, writing a timestamp, setting a flag.
What's lost here isn't just aesthetics. What's lost is the term. At the method layer, the term (Order.confirm) has disappeared, and only fragments of its definition (setStatus, setConfirmedAt) remain.
What was originally bracketed under a single term — "confirming an order" — has been unraveled into fragments of definition in the code. As a result, every time a reader encounters this code, they must mentally reconstruct:
- What term's definition did these fragments belong to? (cognitive compression is lost)
- What meaning is this update part of? (foundation for reasoning is lost)
- How much needs to happen before we can say "the order has been confirmed"? (change resilience is lost)
In other words, the cognitive unit that the code would have provided the reader — if the term had been preserved — disappears from the code's surface because only definition fragments remain without a term to bracket them. This loss of terms is precisely what tends to happen with setter/getter-centric design.
The same thing happens at even smaller scales:
```java
// Set an email address
user.setEmail("[email protected]");

// Change an email address
user.changeEmail("[email protected]");
```
User.setEmail is a term carved by data structure convenience, and its definition is nothing more than "put a value into a field." Meanwhile, User.changeEmail turns the act of "changing an email address" into a term. If its definition includes rules like notification to the old email, change history recording, validation, and re-authentication, the correspondence between term and definition becomes natural and self-contained.
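A hedged sketch of what that self-contained definition could look like. Validation and history recording are shown; notification and re-authentication are left as comments, since they would need injected collaborators. All names here are illustrative.

```java
import java.util.ArrayList;
import java.util.List;

class User {
    private String email;
    private final List<String> emailHistory = new ArrayList<>();

    User(String initialEmail) {
        this.email = initialEmail;
    }

    // Term: User.changeEmail. Definition: validate, record history,
    // then update. (Notifying the old address and re-authentication
    // would also live here, via injected collaborators.)
    public void changeEmail(String newEmail) {
        if (newEmail == null || !newEmail.contains("@")) {
            throw new IllegalArgumentException("invalid email: " + newEmail);
        }
        emailHistory.add(this.email);
        this.email = newEmail;
    }

    public String email() {
        return email;
    }

    public List<String> history() {
        return emailHistory;
    }
}
```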
What Distorted the Way Terms Were Carved
So why did this kind of carving become widespread?
1. Persistence concerns flowed directly into models. ORMs strongly suggest the mapping "table column = class property." As a result, terms became projections of table names, and definitions became projections of columns. Persistence, not perception, ended up determining terms and definitions.
2. JavaBeans conventions and frameworks made property-centric design the default. If you provide getters/setters, the framework reads them. The IDE auto-generates them. This convenience is significant. But it also makes it easy to develop the habit of creating classes as collections of properties before thinking about "what terms to establish" or "what their definitions should be."
3. Division of labor tended to fragment meaning. Separating screen, API, and DB concerns into different structures is common in practice. But when each starts carving Order according to its own convenience, "what is an order, as a domain concept?" becomes nobody's responsibility.
In short, setter-centric design spread not because it was fundamentally superior, but because it rode well on technical and developer-experience convenience.
Where It Leads: The Anemic Domain Model
As setter-centric carving progresses, the definition of a term shifts from "what it can do" to merely "what it holds." The term Order remains, but its definition becomes a mere box holding status and confirmedAt, and meaningful behavior like "confirm" gets pushed out to an external OrderService.
```java
// Order only holds data
class Order {
    private OrderStatus status;
    private LocalDateTime confirmedAt;
    // setters/getters only...
}

// Behavior is buried in a service procedure
class OrderProcessingService {
    public void process(Order order, Payment payment) {
        order.setStatus(CONFIRMED);
        order.setConfirmedAt(now);
        payment.setAuthorized(true);
        notificationService.send(order.getEmail(), "Your order has been confirmed");
        // ...other steps continue
    }
}
```
This is the state Martin Fowler named the Anemic Domain Model. The term exists, but its definition is hollow. I interpret this metaphor of "anemia" as a term drained of its definition (= behavior), left empty.
The act of "confirming an order" dissolves into a large procedure called process, making it hard to read where the definition of "confirmation" begins and ends. Operations remain in the code. But the term that brackets them doesn't. You can trace what's being done, but what it is becomes hard to see.
So would the solution be to cram all the logic scattered across services back into the Order class? Not quite:
```java
class Order {
    public void confirm() { ... }
    public void cancel() { ... }
    public void sendConfirmationEmail() { ... }
    public void calculateTax() { ... }
    public void generateInvoicePdf() { ... }
    public void syncToExternalApi() { ... }
}
```
This is the opposite of anemia — bloat. Tax calculation, PDF generation, and external API sync are all crammed into the definition of a single term, Order. Just moving methods into the class causes the definition to swell until the meaning of the term blurs.
What matters isn't where methods live. It's establishing terms along the lines of cognitive units and giving each an appropriate definition. For an anemic model, bring the act of "confirm" back into Order's definition. For a bloated model, carve out tax calculation as TaxCalculation and PDF generation as InvoiceGenerator — each as its own independent term. Both follow the same principle: aligning cognitive units with term units.
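As one possible shape of that carving (the amounts and the tax rule are simplified assumptions), tax calculation becomes its own independent term instead of a method on Order:

```java
// Order keeps only what belongs to "an order".
class Order {
    private final long totalInCents;
    private boolean confirmed;

    Order(long totalInCents) {
        this.totalInCents = totalInCents;
    }

    public void confirm() {
        this.confirmed = true;
    }

    public long totalInCents() {
        return totalInCents;
    }
}

// Tax calculation carved out as an independent term.
class TaxCalculation {
    private final int ratePercent;

    TaxCalculation(int ratePercent) {
        this.ratePercent = ratePercent;
    }

    // Term: TaxCalculation.taxFor. Definition: apply the rate
    // to the order's total.
    public long taxFor(Order order) {
        return order.totalInCents() * ratePercent / 100;
    }
}
```

Each term now stays cohesive around a single cognitive unit, matching the principle stated above.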
Of course, for structures whose purpose is data transport — DTOs, ViewModels, persistence Entities — setters are natural. Where and how you carve is what matters most.
DDD Determined the "Direction" of Terms
What OOP Was Missing
OOP is a powerful paradigm with the "power to establish terms and give them definitions." However, OOP itself has no mechanism for distinguishing "Whose terms are these? The domain's? The DB's? The framework's?" It has power, but the direction isn't determined automatically. That's exactly why it can be distorted.
Power and Direction
I understand the relationship between OOP and DDD like this:
OOP gives you the "power" to write terms and definitions; DDD poses the question of "what terms should we establish, and for whom?"
DDD directed that power toward the domain (business area). The concept of "Ubiquitous Language" is precisely a practice of establishing terms shared by the team and reflecting their definitions in code.
OOP: The "power" to write terms and definitions
↓ (can be distorted if direction isn't set)
DDD: Directs terms toward the "domain"
↓
Result: Domain perceptions are fixed as appropriate terms and definitions in code
This is also why I was drawn to DDD.
How "Writing Terms and Definitions" Relates to DDD
At this point, some might think, "'Programming is writing terms and their definitions' — isn't that just saying the same thing as DDD's Ubiquitous Language?" There's certainly overlap. But DDD is a practical methodology that says "domain experts and code should use the same words." What I'm trying to say is one layer deeper: the act of programming itself is inherently an act of writing terms and their definitions. I see DDD as providing practical answers — "Whose terms should we adopt?" and "How should we carve terms and definitions per context (Bounded Context)?" — built on top of that recognition.
The Difficulty That Remains
Even with DDD, the difficulty of drawing boundaries when turning perceptions into terms doesn't disappear. Ubiquitous Language tells us to "align with domain experts' words" and helps us draw boundaries. However, how to interpret domain experts' perceptions, at what granularity to carve terms, and how much to include in a single definition — these ultimately depend on the developer's judgment. OOP gives us the power to write terms and definitions. DDD directs that power toward the domain. But the judgment of where to draw boundaries is, no matter how far you go, dependent on human perception. Even looking at the same business domain, different people can produce different Orders. This is what makes design difficult, and at the same time, why teams need to align on terms and definitions.
Teams and Terms
So far, this has mainly been about how an individual developer writes terms and definitions. But in team development, another problem emerges. An individual's ability to write terms and definitions and a team's ability to share them are separate matters. Even if each person understands OOP and DDD, if everyone carves terms based on their own perceptions, both terms and definitions will diverge within the same codebase. This is the state where the shared foundation — one of the four powers of terms and definitions — has collapsed. If terms aren't shared, a team can't even point to the same concept in conversation.
Good design differs by team. The optimal term system for one team may not be optimal for another. But one thing can be said:
Aligning with vocabulary commonly used in the industry is an investment in team sustainability.
A proprietary term system might be efficient for people deeply familiar with that team's context. But when new members join and existing members leave, the more idiosyncratic the term system, the more fragile the team becomes. Using industry-standard vocabulary as a base increases resilience to team turnover.
Going a bit deeper, this is also about readability. Readability isn't just about having easy-to-read names. I think it's the state where the meaning expected from a term and the meaning its definition actually carries align without strain.
Summary
That's my personal OOP interpretation.
Programming, at least from a design perspective, is establishing terms and giving them definitions — that's what I believe.
OOP is an expressive form for writing perceptions as terms and definitions. It turns human perceptions into objects as terms, and gives those definitions through methods and properties.
DDD is a way of thinking that directs terms toward the domain. It tells us where to point OOP's power.
I wrote about why I'm drawn to programming in a previous article. This time, I've dug into the next layer — "what is being expressed."
Establish terms carefully, and describe the world with those definitions. That's all there is to it.
See you next time.
Afterword — What Happens to This Interpretation in the Age of AI?
Honestly, since I started using AI daily, I've thought many times, "This perspective might become outdated." Today's AI is powerful. It can handle code design and naming at a considerable level. Even without a human carefully thinking about "giving perceptions a term and writing its definition," AI can probabilistically derive plausible terms and definitions.
On the other hand, when giving instructions to AI, vague words produce vague results. "Make it nice" won't produce "nice" code. Or when AI generates code full of setters in an anemic model, can you notice "this is wrong"? That might still require the kind of perspective described in this article.
That said, what this article presented is just one way of looking at things. If AI can derive terms probabilistically, there's a possibility that AI itself could produce perspectives superior to this perception-based interpretation. When I re-read this article five years from now, will I think "This perspective still holds up" or "I've found a better way to think about it"? I don't know right now. That's exactly why I'm recording my current thinking here.