Rethinking Object-Oriented Programming as 'Defining Terms' — What Lies Before OOP Principles


⚠️ Disclaimer: The writing of this article was assisted by generative AI, based on aka's ideas.

Hey, it's aka.

In a previous article, I wrote, "I plan to write separate articles about OOP, TDD, and DDD." So this time, I'm writing about OOP and DDD. However, this is not a textbook explanation of OOP — it's a personal interpretation.

In this article, I'll rethink OOP through the lens of "object = term". I'd be happy if you come away thinking, "Oh, that's an interesting way to look at OOP."

Programming Is Defining Terms

Before diving into OOP, let me share one premise.

Programming is the act of defining terms.

Since we write in programming languages, this might seem obvious. Yet in practice, most people think of programming as "writing processes" rather than "defining terms." Of course, there are aspects like computation, control flow, state transitions, and I/O. But when we turn those into code, we define terms in the form of function names and class names.

In a program, the terms you define behave exactly as you defined them. The terms you chose work with the meanings you gave them. Let's look at a concrete example.

// In an order management system for an e-commerce site
class Order {
    private final List<OrderLine> lines;
    private final Money totalAmount;

    public void confirm() { ... }
    public void cancel() { ... }
}

This Order defines that "in this system, an 'order' is something that has line items, has a total amount, can be confirmed, and can be cancelled." Every property and method constitutes the definition of the term "order."

So why should we care about term definitions? Because it's not just a coding style — it's the foundation of software quality. Term definitions have at least four powers:

  1. Cognitive compression. When you write order.confirm(), you can treat it as "confirming an order" without thinking about all the details of what happens inside. Complexity is contained within the term.
  2. Foundation for reasoning. Well-defined terms make it easier to reason about the next design decision. The thought "If Order has confirm(), shouldn't it also have cancel()?" comes naturally.
  3. Shared foundation. When terms are defined, the team can more easily refer to the same concept with the same word. Instead of "that process" or "that thing," you can have conversations using clear names.
  4. Change resilience. When terms are properly defined, the impact of changes becomes more predictable. "I want to change the order confirmation logic" leads you to look at Order.confirm() — it serves as a guide.

The impact on maintainability is particularly significant.

Terms need to be defined at appropriate granularity. Terms that are too coarse lose their function as terms. The Order above was a definition focused on a single phase — "an order that can be confirmed or cancelled." But what if we crammed responsibilities for every phase into this single Order? The cart phase where items are added, the confirmed phase after payment clears, the phases during and after delivery — what happens when we express all of these with a single Order?

order.addItem(item);       // Should only be callable in the cart phase
order.confirm();           // Only meaningful before confirmation
order.requestRefund();     // Only meaningful after fulfillment

You can't tell from the code alone which method should be called when. A new developer would have to trace internal status flags before understanding, "Ah, so that's what this Order means." If the terms had been separated from the start — Cart, ConfirmedOrder, CompletedOrder — the name alone would convey what state it's in and what can be done with it. This is why term definition directly impacts maintainability.
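The phase separation described above can be sketched in code. This is a minimal illustration, not a full order model: the class and method names (Cart, ConfirmedOrder, CompletedOrder) come from the article's example, while the item type and method bodies are stand-ins.

```java
import java.util.ArrayList;
import java.util.List;

// Each phase is its own term, so each phase's operations exist only on that term.
class Cart {
    private final List<String> items = new ArrayList<>();

    // Adding items is only possible while still a cart
    void addItem(String item) { items.add(item); }

    // Confirming a cart yields a different term: a ConfirmedOrder
    ConfirmedOrder confirm() { return new ConfirmedOrder(List.copyOf(items)); }
}

class ConfirmedOrder {
    private final List<String> lines;
    ConfirmedOrder(List<String> lines) { this.lines = lines; }

    // Only a confirmed order can move on to completion
    CompletedOrder complete() { return new CompletedOrder(lines); }
    List<String> lines() { return lines; }
}

class CompletedOrder {
    private final List<String> lines;
    CompletedOrder(List<String> lines) { this.lines = lines; }

    // Refunds only make sense after fulfillment, so the method lives here
    void requestRefund() { /* refund flow would start here */ }
    List<String> lines() { return lines; }
}
```

With this carving, calling addItem on a confirmed order is not a runtime surprise but a compile error: the term itself says what can be done.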

One note here: This premise is not specific to OOP. Whether functional or procedural, programmers define terms through function names and type names. However, different paradigms lead to different ways of carving terms.

For example, in functional programming, terms are defined through algebraic data types:

-- "OrderStatus" means one of: InCart, Confirmed, or Completed
data OrderStatus = InCart | Confirmed | Completed

-- "Order" is order lines plus a status
data Order = Order { orderLines :: [OrderLine], status :: OrderStatus }

-- "confirm" transforms an order into a confirmed order
confirm :: Order -> Order
confirm order = order { status = Confirmed }

data OrderStatus = InCart | Confirmed | Completed defines "an order status is one of: in cart, confirmed, or completed." In OOP, terms are defined by bundling properties and methods into classes; in FP, they're defined through types and functions over those types. The same concepts are carved out with different tools.

Let's compare with the OOP-side code:

// OOP-style: the order object owns state and changes itself
class Order {
    private OrderStatus status;

    public void confirm() {
        this.status = OrderStatus.CONFIRMED;
    }
}

In OOP terms, "the order changes its own state." In FP terms, "confirm is a transformation that produces a new confirmed order." The handling of state differs. But both are defining the same concepts — "an Order has line items and a status" and "confirm means changing an Order to a confirmed state" — as terms in code.
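For comparison, the FP-style carving can also be sketched in Java itself, using an immutable record. This is just an illustrative translation of the Haskell above; the names mirror that example.

```java
// "OrderStatus" means one of: in cart, confirmed, or completed
enum Status { IN_CART, CONFIRMED, COMPLETED }

// An immutable order: confirm does not mutate, it produces a new confirmed order
record Order(java.util.List<String> orderLines, Status status) {

    // "confirm" as a transformation: Order -> Order
    Order confirm() {
        return new Order(orderLines, Status.CONFIRMED);
    }
}
```

The original order is untouched after confirm() returns, which is exactly the FP reading: "confirm is a transformation that produces a new confirmed order."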

The Roles of OOP and DDD

If programming is defining terms, the next question becomes: How should those terms be defined?

When defining the term "order," what should it include? At what granularity should it be carved? There's too much freedom — "define terms" alone doesn't give you answers. That's where OOP and DDD come in as tools.

Layer 1: Programming is defining terms (paradigm-independent principle)
    "So how should those terms be defined?"
Layer 2: OOP and DDD serve as tools
    - OOP: An expressive form for writing perceptions as terms
    - DDD: A way of thinking that directs terms toward the domain

From here, I'll dig deeper into Layer 2 — how OOP and DDD help with term definition.

Why OOP Is Suited for Defining Terms

OOP (Object-Oriented Programming) is generally described as "a paradigm that bundles data and related operations into objects, composing programs through interactions between objects." Why is this object mechanism suited for defining terms? There are three main reasons:

  1. Objects make it easy to turn perceptions directly into terms
  2. Mechanisms like encapsulation and abstraction help maintain term boundaries
  3. Subject-verb relationships can be naturally fixed in code

Object = Perception Turned into a Term

In the general definition, an object is "something that bundles data and operations." But to me, an object is a human perception fixed as a term in code.

What's important here is that it's not "copying the real world as-is." What OOP captures is the result of how the real world was carved out. When humans look at reality and perceive "there's a boundary here" or "this is a coherent unit," that carving is what gets translated into code. If the system's purpose changes, the carving changes even for the same real world. I use the word "perception" here because this carving is not objectively unique — what constitutes a unit and where boundaries are drawn depends on the observer's position and purpose.

For example, suppose you're building a reservation system for a pet salon:

// "Dog" in a pet salon reservation system
class Dog {
    private final Name name;
    private final Owner owner;
    private final Breed breed;
}

This Dog is not a complete biological model of a dog. "What is a dog to a pet salon reservation system?" — it has a name, has an owner, and has a breed. There's no organ structure, genetic information, or bark(). Those aren't concerns of this system.

In other words, a term (object) is not a copy of something that exists, but a unit of perception carved out for a purpose.

OOP Principles Are Means, Not Essence

With this perspective, the positioning of OOP principles looks a bit different. Let's use a racing game as an example:

// "Car" in a racing game
interface Car {
    void accelerate();
    void brake();
    void steer(Direction direction);
}

// "Player": someone who drives a car in the race
class Player {
    private final PlayerId id;
    private Car car;

    public void drive(Direction direction) {
        car.accelerate();
        car.steer(direction);
    }
}

Car doesn't include vehicle inspection dates or insurance contract details. In a racing game, if it can accelerate, brake, and steer, it's a "car." And Player doesn't know Car's internal implementation. All it needs is the definition that "a car can accelerate, brake, and steer."

This is similar to the relationship between humans and cars. Humans can drive a car without knowing how the engine works. All that's needed is knowing how to interact with it. When you turn perceptions directly into terms, that relationship naturally appears in code.

What's happening here is close to the so-called OOP principles — encapsulation, abstraction, polymorphism, and inheritance. The fact that Player can drive without knowing Car's internals is encapsulation; the fact that the pet salon's Dog doesn't need genetic information is abstraction. In my interpretation, these are means, not essence. The essence is defining perceptions as terms. These principles are tools for handling those definitions well. If you mistake them for the essence, you end up with encapsulation for encapsulation's sake, inheritance for inheritance's sake. If you see them as means, "Are the terms well-defined?" becomes your criterion.

Let's look at another example:

// Common definition of a payment method: something that can pay
interface PaymentMethod {
    PaymentResult pay(Money amount);
}

class CreditCard implements PaymentMethod {
    public PaymentResult pay(Money amount) { ... }
}

class BankTransfer implements PaymentMethod {
    public PaymentResult pay(Money amount) { ... }
}

By making the perception "a payment method is something that can make payments" into a common definition, both CreditCard and BankTransfer can be treated as the same "payment method." Here too, OOP principles work as means for defining terms well.
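A minimal runnable version of this can show the payoff of the common definition. To keep it self-contained, PaymentResult is reduced here to a boolean success flag, and the method bodies are stand-in logic.

```java
// Common definition of a payment method: something that can pay
interface PaymentMethod {
    boolean pay(int amount); // true when the payment succeeds (simplified result)
}

class CreditCard implements PaymentMethod {
    public boolean pay(int amount) { return amount > 0; } // stand-in logic
}

class BankTransfer implements PaymentMethod {
    public boolean pay(int amount) { return amount > 0; } // stand-in logic
}

class Checkout {
    // The caller only needs the term "payment method"; it never asks which kind
    static boolean payWith(PaymentMethod method, int amount) {
        return method.pay(amount);
    }
}
```

Checkout is written entirely against the term PaymentMethod, so adding a new payment method later means defining a new term that satisfies the common definition, with no change to Checkout.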

OOP's Syntax Naturally Expresses Term Definitions

There's another reason OOP is suited for defining terms: its syntax.

// OOP-style: the term (object) acts by itself
order.confirm(paymentInfo);

// Procedural-style: the term (data) and the operation are separated
confirm(order, paymentInfo);

order.confirm() directly writes the perception "an order is something that can be confirmed." The subject-verb relationship is reflected in the code.

That said, confirm(order) isn't bad. It's just a different way of carving. However, in OOP, behavior belongs to the type. That means "confirm" is defined as part of Order, so the term definition includes where responsibility lies. In FP, behavior exists outside the type, so term definition and responsibility attribution are separated. When you want to fix a human perception, responsibility and all, to a subject, OOP's syntax feels natural — that's my sense of it.

What Happens When Terms Are Carved Poorly

OOP has the power to turn perceptions into terms. So what happens when terms are carved not by perception but by technical convenience?

The Problem with Setters as a Carving Approach

Setter culture is the most visible example of this misalignment.

order.confirm();

In this form, order reads as a term meaning "an order that can be confirmed." Subject and verb are connected, and you can see not just what's happening but what it's being treated as.

Now, what happens in a world where the term confirm() doesn't exist? For example, code like this appears inside a service class:

// A process buried somewhere in OrderService.java
order.setStatus(CONFIRMED);
order.setConfirmedAt(now);
payment.setAuthorized(true);

What's here is certainly processing. But it's no longer the cognitive unit of "confirming an order" — it's been decomposed into granular technical operations: changing a status, writing a timestamp, setting a flag.

What's lost here isn't just aesthetics. What's lost is the term itself.

What was originally understood as a single act — "confirming an order" — has been unraveled into "a collection of field updates" in the code. As a result, every time a reader encounters this code, they must mentally reconstruct that these scattered updates together mean "confirming an order."

In other words, the cognitive unit that the code would have provided the reader — if the term had been preserved as a term — disappears from the code's surface through decomposition. This loss of terms is precisely what tends to happen with setter/getter-centric definitions.

The same thing happens at even smaller scales:

// Set an email address
user.setEmail("new@example.com");

// Change an email address
user.changeEmail("new@example.com");

setEmail is a term carved by data structure convenience. Meanwhile, changeEmail turns the act of "changing an email address" directly into a term. In this form, rules like notification to the old email, change history recording, validation, and re-authentication can naturally be contained within it.
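Here is a sketch of what "containing the rules within the term" can look like. The validation and history rules below are illustrative, not a prescription; the point is only that they have a natural home inside changeEmail.

```java
import java.util.ArrayList;
import java.util.List;

class User {
    private String email;
    private final List<String> emailHistory = new ArrayList<>();

    User(String email) { this.email = email; }

    // The act "change an email address" as a term, with its rules inside
    void changeEmail(String newEmail) {
        if (!newEmail.contains("@")) {        // validation lives inside the term
            throw new IllegalArgumentException("invalid email: " + newEmail);
        }
        emailHistory.add(this.email);         // change history recording
        this.email = newEmail;
        // notifying the old address, re-authentication, etc. would also go here
    }

    String email() { return email; }
    List<String> emailHistory() { return emailHistory; }
}
```

With setEmail, every caller would have to remember these rules; with changeEmail, forgetting them is no longer possible, because the term carries them.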

What Distorted the Way Terms Were Carved

So why did this kind of carving become widespread?

1. Persistence concerns flowed directly into models. ORMs strongly suggest the mapping "table column = class property." As a result, classes tended to be treated not as terms representing perceptions, but as projections of table structures.

2. JavaBeans conventions and frameworks made property-centric design the default. If you provide getters/setters, the framework reads them. The IDE auto-generates them. This convenience is significant. But it also makes it easy to develop the habit of creating classes as collections of properties before thinking about "what does this term mean?"

3. Division of labor tended to fragment meaning. Separating screen, API, and DB concerns into different structures is common in practice. But when each starts carving Order according to its own convenience, "what is an order, as a domain concept?" becomes nobody's responsibility.

In short, setter-centric design spread not because it was fundamentally superior, but because it rode well on technical and developer-experience convenience.

Where It Leads: The Anemic Domain Model

As setter-centric carving progresses, objects come to be defined not by "what they can do" but by "what they hold." Order becomes a mere box holding status and confirmedAt, and meaningful behavior like "confirm" gets pushed out to an external OrderService.

// Order only holds data
class Order {
    private OrderStatus status;
    private LocalDateTime confirmedAt;
    // setters/getters only...
}

// Behavior is buried in a service procedure
class OrderProcessingService {
    public void process(Order order, Payment payment) {
        order.setStatus(CONFIRMED);
        order.setConfirmedAt(now);
        payment.setAuthorized(true);
        notificationService.send(order.getEmail(), "Your order has been confirmed");
        // ...other steps continue
    }
}

This is the state Martin Fowler named the Anemic Domain Model. The object has data but no meaningful behavior. I interpret this metaphor of "anemia" as an object drained of its blood (= behavior), left empty.

The act of "confirming an order" dissolves into a large procedure called process, making it hard to read where "confirmation" begins and ends. Operations remain in the code. But terms don't. You can trace what's being done, but what it is becomes hard to see.

So would the solution be to cram all the logic scattered across services back into the Order class? Not quite:

class Order {
    public void confirm() { ... }
    public void cancel() { ... }
    public void sendConfirmationEmail() { ... }
    public void calculateTax() { ... }
    public void generateInvoicePdf() { ... }
    public void syncToExternalApi() { ... }
}

This is the opposite of anemia — bloat. Tax calculation, PDF generation, and external API sync are all crammed into the term "order." Just moving methods into the class doesn't fix things when the cognitive unit is broken. "What is an order?" becomes blurry.

What matters isn't where methods live. It's carving terms along the lines of cognitive units. For an anemic model, bring the act of "confirm" back into Order. For a bloated model, carve out tax calculation as "tax calculation" and PDF generation as "invoice generation" — each as its own independent term. Both follow the same principle: aligning cognitive units with term units.
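The rebalanced carving might look like the sketch below. The names (TaxCalculator, the 10% rate) are illustrative stand-ins; what the sketch shows is "confirm" returning to Order while tax calculation becomes its own independent term.

```java
import java.time.LocalDateTime;

enum OrderStatus { IN_CART, CONFIRMED, COMPLETED }

class Order {
    private OrderStatus status = OrderStatus.IN_CART;
    private LocalDateTime confirmedAt;

    // "Confirming an order" is one cognitive unit, so it is one term
    void confirm() {
        this.status = OrderStatus.CONFIRMED;
        this.confirmedAt = LocalDateTime.now();
    }

    OrderStatus status() { return status; }
}

// "Tax calculation" carved out as its own independent term,
// rather than crammed into Order
class TaxCalculator {
    int taxOn(int amount) { return amount / 10; } // illustrative flat 10% rate
}
```

Order stays focused on what an order is and does; tax calculation, invoice generation, and external sync each get their own terms, and each cognitive unit aligns with a term unit.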

Of course, for structures whose purpose is data transport — DTOs, ViewModels, persistence Entities — setters are natural. Where and how you carve is what matters most.

DDD Determined the "Direction" of Terms

What OOP Was Missing

OOP is a powerful paradigm with the "power to define terms." However, OOP itself has no mechanism for distinguishing "Are these terms for the domain? For the DB? For the framework?" It has power, but the direction isn't determined automatically. That's exactly why it can be distorted.

Power and Direction

I understand the relationship between OOP and DDD like this:

OOP gives you the "power" to define terms, and DDD gives you the question "which terms should we define, and for whom?"

DDD directed that power toward the domain (business area). The concept of "Ubiquitous Language" is precisely a practice of defining terms shared by the team and reflecting them in code.

OOP:  The "power" to define terms
       ↓ (can be distorted if direction isn't set)
DDD:  Directs terms toward the "domain"
Result: Domain perceptions are fixed as appropriate terms in code

This is also why I was drawn to DDD.

The Relationship Between Term Definition and DDD

At this point, some might think, "'Programming is defining terms' — isn't that just saying the same thing as DDD's Ubiquitous Language?" There's certainly overlap. But DDD is a practical methodology that says "domain experts and code should use the same words." What I'm trying to say is one layer deeper: the act of programming itself is inherently an act of defining terms. I see DDD as providing practical answers — "Whose terms should we adopt?" and "How should we carve models per context (Bounded Context)?" — built on top of that recognition.

Teams and Terms

So far, this has mainly been about how an individual developer defines terms. But in team development, another problem emerges. An individual's ability to define terms and a team's ability to share those terms are separate matters. Even if each person understands OOP and DDD, if everyone carves terms based on their own perceptions, definitions will diverge within the same codebase.

Good design differs by team. The optimal term system for one team may not be optimal for another. But one thing can be said:

Aligning with vocabulary commonly used in the industry is an investment in team sustainability.

A proprietary term system might be efficient for people deeply familiar with that team's context. But when new members join and existing members leave, the more idiosyncratic the term system, the more fragile the team becomes. Using industry-standard vocabulary as a base increases resilience to team turnover.

Going a bit deeper, this is also about readability. Readability isn't just about having easy-to-read names. I think it's the state where the meaning expected from a term and the meaning the implementation actually carries align without strain.

Summary

That's my personal OOP interpretation.

Programming, at least from a design perspective, is defining terms — that's what I believe.

OOP is an expressive form for writing perceptions as terms. It fixes human perceptions as objects, making it easy to place meaning and behavior close together within terms.

DDD is a way of thinking that directs terms toward the domain. It tells us where to point OOP's power.

I wrote about why I'm drawn to programming in a previous article. This time, I've dug into the next layer — "what is being expressed."

Define terms carefully, and describe the world with those definitions. That's all there is to it.

See you next time.


Afterword — What Happens to This Interpretation in the Age of AI?

Honestly, since I started using AI daily, I've thought many times, "This perspective might become outdated." Today's AI is powerful. It can handle code design and naming at a considerable level. Even without a human carefully thinking about "turning perceptions into terms," AI can probabilistically derive plausible terms and concepts.

On the other hand, when giving instructions to AI, vague words produce vague results. "Make it nice" won't produce "nice" code. Or when AI generates code full of setters in an anemic model, can you notice "this is wrong"? That might still require the kind of perspective described in this article.

That said, what this article presented is just one way of looking at things. If AI can derive terms probabilistically, there's a possibility that AI itself could produce perspectives superior to this perception-based interpretation. When I re-read this article five years from now, will I think "This perspective still holds up" or "I've found a better way to think about it"? I don't know right now. That's exactly why I'm recording my current thinking here.
