3 Important Lessons in Software Development After a Decade in the Industry

Paul Edward Golez
8 min read · Jul 23, 2023


I will soon mark ten years in the software development industry. Below, I lay out a few valuable lessons I have learned across different software projects throughout that time.


1. Your software doesn’t have to be perfect.

I’ve been on a number of greenfield projects, and at the start, stakeholders always have an infinite list of features they want implemented. It’s natural to be that way; stakeholders are often visionaries and will always want the best possible product.

The problem is that we have to ground ourselves in reality: resources are limited. This includes budget, effort, and time. Of these, time is the most precious, because there is no way to get wasted time back.

Most startups want their product launched as soon as possible. However, cramming every minor feature into the initial launch would significantly delay it. The more features you add to software, the more complex it becomes, and the rise in complexity isn’t always linear; it is often worse.


So which features does one build then?

I use the Pareto principle, also known as the 80/20 rule, as my guide. It states that roughly 80% of a system’s output comes from only 20% of the inputs.

In other words, you have to figure out the handful of features that will deliver 80% of the application’s impact.

Say my team starts a greenfield project: a ride-hailing application similar to Grab or Uber. I would focus on the problem this app is meant to solve: matching passengers and drivers. If the schedule is tight, the initial launch might only connect anonymous passengers and drivers, with no authentication or sophisticated payment system.

Why?

Because the initial launch is there to test the waters, and in a competitive market, time is of the essence. The stakeholders want to see whether prospective users will actually adopt the product; this version is commonly referred to as the minimum viable product (MVP). If the MVP is a hit, the next set of features is evaluated for implementation based on potential impact. If users don’t adopt the MVP, the stakeholders have at least minimized the cost of that lesson.
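To make that concrete, here is a minimal sketch of what the core of such an MVP might look like: a naive matcher that pairs a ride request with the nearest available driver. The names and the straight-line distance heuristic are illustrative assumptions, not a production design.

```python
import math
from dataclasses import dataclass
from typing import Optional


@dataclass
class Driver:
    driver_id: str
    lat: float
    lon: float
    available: bool = True


@dataclass
class RideRequest:
    passenger_id: str
    lat: float
    lon: float


def match_nearest_driver(request: RideRequest, drivers: list[Driver]) -> Optional[Driver]:
    """Pair a ride request with the closest available driver.

    Straight-line distance is good enough for a first launch; road routing,
    surge pricing, and payments can all come later.
    """
    candidates = [d for d in drivers if d.available]
    if not candidates:
        return None
    nearest = min(
        candidates,
        key=lambda d: math.hypot(d.lat - request.lat, d.lon - request.lon),
    )
    nearest.available = False  # reserve the driver for this trip
    return nearest
```

Everything else (accounts, payments, ratings) can wait until the core loop proves itself.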

The same principle applies to bug fixing: not all bugs are equal. Some bugs are disastrous, some are hard to fix, and some rarely occur. From experience, these are the three factors we weigh when deciding whether to fix a bug: impact, required effort, and frequency.

When a bug has low impact and is very unlikely to occur but requires tremendous effort to fix, I set it as low priority. Again, resources are limited. The effort spent fixing it could have gone to tasks with more impact: creating new features, fixing more critical bugs, or setting up automated tests. A rough way to think about this is sketched below.
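Here is one way such a triage could look in code. The ratings and the scoring formula are entirely illustrative; every team would tune them differently.

```python
def bug_priority(impact: int, frequency: int, effort: int) -> float:
    """Rough triage score: higher means fix sooner.

    impact and frequency are rated 1 (low) to 5 (high);
    effort is rated 1 (cheap) to 5 (expensive).
    The weighting is arbitrary and should be adjusted per team.
    """
    return (impact * frequency) / effort


# Hypothetical bug list, scored and sorted by priority.
bugs = {
    "checkout crashes on payment": bug_priority(impact=5, frequency=4, effort=2),
    "typo in settings page": bug_priority(impact=1, frequency=2, effort=1),
    "rare crash on one obscure device": bug_priority(impact=3, frequency=1, effort=5),
}

for name, score in sorted(bugs.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{score:5.1f}  {name}")
```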

As much as we want the software to be perfect, that is very unlikely to happen because resources are limited. What we do need to build is software that is just good enough to launch in a timely manner.

To do this, one of the most important skills a developer must have is the ability to evaluate tasks and prioritize them by impact.

Remember the 80/20 rule: not all tasks are created equal. We should always work on the ones with the greatest impact rather than spreading our focus across every little bit and piece of the software.

2. Your code doesn’t have to be optimized.

Programming is a craft, and I believe we like to think of our code as a form of art, to the point that we become attached to it. During my first few years in the field, I was fond of applying the most sophisticated techniques I could in my code.

However after being in the field for a while, I’ve come to realize that 90% of the problems can be solved with the simplest approach. Simplicity is king!

In terms of coding…

Over-optimization usually starts at the most granular level: the code. To make the code execute just a little bit faster, we produce deeply nested loops and selection statements. In the process, we sacrifice a very important property of code: readability.

Readability means the code is easy to follow. When no other specific non-functional requirement demands otherwise, readability must come first.

In the long run, you are likely to come back to the code you’re writing today to modify it. All those complex statements nested together add greatly to the cognitive load. Adding new behavior to a system while preserving the old is harder than writing it fresh from scratch. Not to mention that if you work on a team, another developer may be the next one to touch that code.

Hence, as a default, put readability ahead of optimized code unless the situation genuinely calls for it. If the optimized version would only speed up execution by a few milliseconds or save a few bytes, giving that up is a small price for readable code. Consider the two versions below.
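As a small, made-up illustration: both functions return the top n records by score, but only one of them can be understood at a glance.

```python
# "Clever" version: single pass, manual index juggling, nested control flow.
def top_scores_clever(records, n):
    best = []
    for record in records:
        i = 0
        # Walk the list to find where this record belongs (descending order).
        while i < len(best) and best[i][1] >= record[1]:
            i += 1
        best.insert(i, record)
        if len(best) > n:
            best.pop()
    return best


# Readable version: states its intent directly.
def top_scores_readable(records, n):
    return sorted(records, key=lambda r: r[1], reverse=True)[:n]


records = [("alice", 72), ("bob", 91), ("carol", 64), ("dave", 88)]
print(top_scores_clever(records, 2))    # [('bob', 91), ('dave', 88)]
print(top_scores_readable(records, 2))  # [('bob', 91), ('dave', 88)]
```

Unless profiling shows this exact spot is a bottleneck, the second version is the one I want to maintain.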

In terms of architecture…

Microservices have been a trend in software development for a while now. Any passionate developer would love to work on a project with this architecture.

It’s tempting for such developers to immediately architect new projects with microservices, as if we’ve finally found the right problem for this tool that has been sitting in our toolkit for a while. And just like that, we’ve fallen into the trap.

All these trendy architectures that address scalability and resiliency come with a cost: maintainability. Because they lean on decomposing monoliths into small, manageable components, the number of codebases and pieces of infrastructure to maintain explodes, and that always comes back to bite us.

Not to mention that the interactions between these microservices introduce further complexity, especially when domain boundaries aren’t drawn correctly.

It is always a good idea to be conservative and start with something simple. If you don’t yet have estimated usage numbers for your application, I suggest resorting to the monolith approach. What you can do then is make sure the domains within your monolith are well separated, to minimize the pain of changing the architecture in the future: a modular monolith, as they call it.
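Here is a tiny sketch of what “well-separated domains inside one deployable” could look like, reusing the ride-hailing example. The module names and interfaces are hypothetical; the point is only that domains talk through narrow interfaces and are wired together in one place.

```python
# One deployable application, but each domain sits behind a small, explicit
# interface. Illustrative package layout:
#   app/
#     rides/service.py      -> RideService
#     payments/service.py   -> PaymentService


class PaymentService:
    """Payments domain: the only way other domains touch payment logic."""

    def charge(self, user_id: str, amount: float) -> None:
        print(f"charging {user_id}: {amount:.2f}")


class RideService:
    """Rides domain: depends on the payments interface, never its internals."""

    def __init__(self, payments: PaymentService):
        self._payments = payments

    def complete_ride(self, user_id: str, fare: float) -> None:
        # ...ride bookkeeping would happen here...
        self._payments.charge(user_id, fare)


# Wiring happens in one composition root. If payments later becomes a real
# microservice, only this wiring changes, not every caller.
ride_service = RideService(payments=PaymentService())
ride_service.complete_ride("user-42", 120.0)
```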

So is optimization bad? Not at all. When you optimize, make sure you are solving a problem that actually requires addressing, not a problem that might occur someday but hasn’t appeared yet.

Above all else, I value simplicity over optimized code. Your software will keep evolving as long as it is being used. Change happens, so you want your software to be manageable at the very least.

3. You don’t have to be a genius.

Most young developers fall into the mindset that to be a developer, you must be well-versed in the most efficient algorithms and data structures.

This is a big misconception. I don’t think one has to master these details to be a great developer. You just have to be a problem-solver.

A problem-solver does not necessarily know the internals of these algorithms; a problem-solver just has to know that a solution exists and in which situations it applies.

One of the most challenging courses back in school was “Data Structures and Algorithm Analysis”. It was packed with hardcore coding and math. The main criterion for passing was being able to implement your own data structures, which put a lot of pressure on students.

Now that I work in the field, I have rarely needed to implement my own data structures. There are tons of libraries available! Sometimes these data structures are even built into the programming language itself.

Was learning them useless? No. My point is that you don’t have to worry if you can’t build these complicated things yourself. What you need to focus on is knowing when to use which data structure based on your needs.

This is in line with the “don’t reinvent the wheel” principle. The little generic problems you are encountering now have probably already been solved by other developers, and those solutions are readily available. Why not use them if it saves you a ton of time and effort? A few examples of this in practice are shown below.
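For instance, Python alone ships the building blocks many of us once implemented by hand in that course. These are standard-library modules, used here purely as illustration.

```python
import heapq
from collections import Counter, deque

# Priority queue without writing your own binary heap.
jobs = [(2, "send invoice"), (1, "alert on-call"), (3, "archive logs")]
heapq.heapify(jobs)
print(heapq.heappop(jobs))  # (1, 'alert on-call'): lowest priority number first

# FIFO queue with O(1) operations at both ends, no hand-rolled linked list.
queue = deque(["first", "second"])
queue.append("third")
print(queue.popleft())  # 'first'

# Frequency counting without manually managing a dictionary.
print(Counter("mississippi").most_common(2))  # [('i', 4), ('s', 4)]
```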

Not all problems should be solved by automation. I mean, they can be, but they shouldn’t. As much as technology has given us convenience, it isn’t a silver bullet for everything. There are tasks that could be automated, but we deliberately choose not to.

Why?

Because human intervention is required.

Why?

Because we still want to be on top of things.

Take production deployments, for example. We can choose to trigger a deployment as soon as we’ve pushed to the main branch (sometimes master, it varies) and have the change live within a few minutes.

But most of the projects I’ve worked on don’t do this. In fact, they deliberately make it a little harder to deploy to production. Why? Because we want to avoid accidents!

Production deployment is a big deal. The worst case could lead to significant business downtime, so we want to do things right when we push those changes. In this case, automating the entire deployment pipeline would NOT be helpful at all; it would be a catalyst for disaster. Sure, we can automate parts of it, breaking the process into chunks to minimize human error, but not the entire thing. One common compromise is sketched below.
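As a purely illustrative sketch (not how any particular pipeline of mine is wired), the idea is a small gate: the rollout itself stays automated, but a human must explicitly confirm the production step.

```python
import subprocess
import sys


def deploy(environment: str) -> None:
    """Run the (placeholder) rollout, but make production opt-in by a human."""
    if environment == "production":
        answer = input("Deploy to PRODUCTION? Type the environment name to confirm: ")
        if answer.strip() != "production":
            print("Aborted: confirmation did not match.")
            sys.exit(1)

    # The rollout command here is a stand-in; the point is that only the
    # decision to trigger it is kept manual, not the steps that follow.
    subprocess.run(["echo", f"deploying to {environment}..."], check=True)


if __name__ == "__main__":
    deploy(sys.argv[1] if len(sys.argv) > 1 else "staging")
```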

My key point is this: being a problem-solver means solving problems in the appropriate context. Some problems are worth solving, so make them convenient and automate them. But there are cases where we simply don’t. We don’t solve problems just for the sake of boosting our egos.

These are the distinctions I make between geniuses and problem-solvers.

The genius would implement his own data structures; the problem-solver would look for an available library and use it.

The genius would immediately start coding the task given to him. The problem-solver would think about the task, figure out the reasons behind it, and only start working on it once those reasons make sense.

To be a great developer is to be a great problem-solver; being a genius is just an add-on, not a requirement.

Conclusion

As a recap, here are the three significant lessons I’ve learned through my experience in software development so far:

  • “Just enough” software over “perfect” software. Optimize the use of your project’s resources to deliver working software, because resources are finite.
  • Simplicity over efficiency. Any active software is evolving; you will want your code to stay maintainable over the long run. Optimize only when the need arises.
  • Problem-solving over exceptional intelligence. You don’t have to be a genius to develop software; you just need to identify the right problems to solve and be resourceful and creative enough to solve them.

Thank you for reading. If you enjoyed my article, follow me for more software development articles.
