In October this year I came across several articles written by Yegor Bugayenko (What's wrong with OOP, MVC vs. OOP and SOLID Is OOP for Dummies) in which a person calling himself Hall of Famer made certain statements which I deemed worthy of comment. Below is a summary of the exchanges which followed.
In this post Hall of Famer said the following:
There is something very wrong with procedural programming, with procedural programming you always end up with amateurish spaghetti code.
Having spent over a decade using that well-known procedural language called COBOL, having been on a Jackson Structured Programming (JSP) course, and having learned about Modular Programming and Structured Programming, I considered his statement to be flawed, so I countered his view with this reply, in which I stated the following:
I disagree. The opposite of spaghetti code is structured code, and it is possible to write structured code in a procedural language just as it is to write spaghetti code in an OO language.
Hall of Famer then replied with this:
Nope it is not, procedural programming always leads to unstructured spaghetti code. If you write in a procedural language with modular design approach, it is actually OO design. Procedural design is spaghetti code, theres no way to write good procedural code. Again note the difference between procedural syntax and procedural design. With C you have to stick with procedural syntax with no OO support natively, but the design approach still can be OO.
I responded with this:
Did you come up with that idea yourself, or can you point to any articles which support that claim? The definition of spaghetti code includes the word "unstructured", so procedural code which is properly structured is NOT spaghetti code. Procedural code which is structured is NOT OO design as OO requires encapsulation, inheritance and polymorphism. Structured programming, as taught by Michael Jackson, is based on sequence, iteration and selection. Besides, OO design did not exist in the 1970s and 80s when I was using COBOL.
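To illustrate the point, here is a minimal PHP sketch (the function and field names are invented purely for illustration) of structured procedural code built from nothing more than sequence, selection and iteration:

```php
<?php
// Sequence, selection and iteration - the three building blocks of
// structured programming - in plain procedural code. No classes, no
// objects, yet demonstrably NOT spaghetti: one entry point, one exit point.

function validateOrderLines(array $lines): array
{
    $errors = [];                         // sequence: step 1
    foreach ($lines as $i => $line) {     // iteration
        if ($line['qty'] <= 0) {          // selection
            $errors[] = "Line $i: quantity must be positive";
        }
        if ($line['price'] < 0) {         // selection
            $errors[] = "Line $i: price cannot be negative";
        }
    }
    return $errors;                       // single exit - no GOTOs anywhere
}
```

Every control path in this function can be drawn on a structure chart, which is precisely what distinguishes structured code from spaghetti code.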
He immediately came back with the following:
Not really, many people have thought of this in the past. The OO design approach really, doesn't necessarily require OO syntax. It is a methodology, and the OO mindset is far more important than OO syntax itself. Just because you use objects doesn't mean you write OO code, procedural code encapsulated in objects ain't OO code.
And nope, there is no way to write properly structured procedural code. Procedural code = spaghetti code, you again had a confusion between procedural code and code written in procedural language. By properly structured I mean application-level design and architecturing, not just writing structures such as control structures, loops and functions. Just because your application has a structure, doesn't mean its 'properly' structured.
I could not let such ridiculous statements go by, so I replied with the following:
The OO design approach really, doesn't necessarily require OO syntax.
You are confusing OO design with OO programming. A design is, or should be, language agnostic as it describes the application before it is built. A good design should be able to be implemented in any language, procedural or OO.
OO programming can only be achieved with a language which directly supports the OO principles of encapsulation, inheritance and polymorphism. If you use those features then your code is OO. If you don't use those features then your code is not OO.
procedural code encapsulated in objects ain't OO code
All code is executed in a linear fashion whether it is in a procedural function or a class method, so there is no effective difference between procedural code and OO code when it is being executed. OOP requires the use of specific features of the language as described above.
there is no way to write properly structured procedural code
There is no such thing as "properly structured", there is either "badly structured" or "well structured". "Properly structured" is an entirely subjective description which means different things to different people. Spaghetti code is code which either has no structure at all, or it is badly structured. It is possible to write COBOL code which is well structured just as it is possible to write Java code which is badly structured spaghetti.
you again had a confusion between procedural code and code written in procedural language
I disagree. If you write code in a procedural language it is procedural. If you write code in an OO language and implement encapsulation, inheritance and polymorphism then your code is OO.
Let me expand on some of the points he has made so far:
If you write in a procedural language with modular design approach, it is actually OO design
I disagree. Modular programming was practiced in procedural languages long before OOP and OOD came on the scene. OO design requires the use of Object Oriented Concepts such as Objects/Classes, Information hiding, Inheritance, Interfaces and Polymorphism. Modular design does not, so to say that the two are the same is plainly wrong.
The OO design approach really, doesn't necessarily require OO syntax
I disagree. If you look at the description of Object Oriented Concepts in OOD you will see that it requires support for OO concepts in the language. None of these concepts existed in the COBOL versions that I used, so I cannot see how it is possible to use OOD for a language that does not support OO concepts.
It is a methodology, and the OO mindset is far more important than OO syntax itself.
I disagree. OOP is not a state of mind, it is an implementation of OO concepts. You cannot implement OO without OO syntax, and if you use OO syntax in your programs then it is OOP.
Just because you use objects doesn't mean you write OO code
I disagree. If you write programs which are oriented around objects then you are, by definition, doing Object Oriented Programming. It may not be the most efficient or the most effective, but it is still OO.
Code which has a structure is not 'properly' structured
I disagree. Either code has a discernible structure which can be shown in a structure chart or it does not. It is either spaghetti code (or ravioli code in the case of OOP) or it is not. This is a boolean condition in which the result is either TRUE or FALSE, YES or NO. There is no "yes it is but no it isn't".
This person does not understand the difference between "procedural" and "object oriented". There is a large amount of functionality which exists in both. OO code is exactly the same as procedural code except for the addition of encapsulation, inheritance and polymorphism. Both paradigms have lines of code containing statements which are executed in a linear fashion. Both paradigms support expressions, operators, control structures, built-in functions and user-defined functions. Both paradigms support the concept of Modular Programming and Structured Programming. One paradigm supports encapsulation, inheritance and polymorphism while the other does not.
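The distinction can be shown in a few lines of PHP (the class names here are invented for illustration). Everything except the class constructs would be equally at home in procedural code; it is only encapsulation, inheritance and polymorphism that make it OO:

```php
<?php
// The three features that distinguish OO code from procedural code.

class Shape                        // encapsulation: data + methods together
{
    protected $size;
    public function __construct(float $size) { $this->size = $size; }
    public function area(): float { return 0.0; }
}

class Square extends Shape         // inheritance: Square reuses Shape
{
    public function area(): float { return $this->size ** 2; }
}

class Circle extends Shape
{
    public function area(): float { return M_PI * $this->size ** 2; }
}

// The caller neither knows nor cares which subclass it is holding.
function totalArea(array $shapes): float
{
    $total = 0.0;
    foreach ($shapes as $shape) {
        $total += $shape->area();  // polymorphism: same call, different behaviour
    }
    return $total;
}
```

Note that the body of totalArea() is plain sequence and iteration, exactly as in procedural code; only the method dispatch is OO.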
In my reply I also responded to a previous question of his:
Oh yeah, do you happen to be the same Tony Marston who made a fool of himself on SitePoint some time ago?
I have often been criticised for my different views, which have sometimes been described as heretical, but I am allowed to have a different view, just like everyone else on this planet. I do not care that my views are different as I write software which pleases my paying customers, not a bunch of ignorant or confused developers.
He followed up with this reply:
I see, so you really are that Tony Marston who keeps embarrassing himself. Of course you are allowed to have different viewpoints, different doesn't necessarily mean bad. But your viewpoints are mostly incorrect, inferior and confused, yet you call the other developers ignorant and confused developers, when you are exactly ignorant and confused yourself. People criticize you not because you have those incorrect opinions, but that you try to convince them that your opinions are good when they clearly are bad. This is exactly why everyone was against you in that Singleton vs Dependency Injection thread.
For those of you who are in the dark, my opinions on dependency injection were published in Dependency Injection is EVIL. If you read it carefully you will notice that I actually say that DI can be beneficial in appropriate circumstances, but that it should not be applied indiscriminately. In my applications there are places where I DO use DI, but there are also places where I DO NOT.
I responded to his post with this:
I see, so you really are that Tony Marston who keeps embarrassing himself.
I am not embarrassed in the least. I find the criticism of my work to be very amusing. I sometimes laugh so much I can feel the tears running down my trouser leg.
Of course you are allowed to have different viewpoints, different doesn't necessarily mean bad.
How very kind of you. It makes a change from being told that my opinions are bad simply because they are different.
But your viewpoints are mostly incorrect, inferior and confused
Then how come the code which I write using my "incorrect, inferior and confused" methods still produces applications which not only work, but work very well? All without the use of those abominations called dependency injection and object-relational mappers.
In this post I made the following statement:
I said your opinions are bad because your opinions are incorrect, outdated and confused, not because they are different.
My opinions are not incorrect for the simple reason that the code which I create most definitely works, and has done for over a decade. My opinions are not outdated simply because I refuse to accept all the add-on definitions to OOP which have been dreamt up by a bunch of pseudo intellectuals. OOP consists of nothing more than encapsulation, inheritance and polymorphism, and everything else is an optional extra.
I have made a list of the OO add-ons which I ignore in A minimalist approach to Object Oriented Programming with PHP.
He made yet more erroneous statements in this reply to which I responded with the following:
Well just because your code works for you, doesn't mean your opinions are correct.
If my methods produce cost-effective software which works then those methods cannot be wrong.
Your code works for your legacy applications/frameworks
If by "legacy" you mean "mature" and "proven" then that is precisely what my customers want. Large corporations do not want leading edge, bleeding edge, immature and unproven applications, they want something with a pedigree. That is what I provide.
your legacy PHP 4 application
FYI it currently runs on PHP 7, and has run through all versions of PHP 5. It is as current as it needs to be.
And those are NOT add-on definitions of OOP, they are fundamental and universally agreed concepts.
I suggest you look up the dictionary definition of "fundamental". In http://dictionary.cambridge.org/dictionary/english/fundamental it is described as "forming the base, from which everything else develops". The original definition of OOP by Alan Kay (who invented the term) specifies nothing more than encapsulation, inheritance and polymorphism, so everything which came after that IS an addition and IS optional. The fact that some people consider that those optional extras are an essential part of OOP just shows that it is THEY who are confused, not me.
In fact, the more confused, old-fashioned and incompetent you are, the more counter-examples I will have when I teach people how to avoid bad habits. In a way, I should thank you for this.
And I should thank you for encouraging techniques that will prevent any programmer from creating software which will be a serious competitor to mine. The true purpose of a software developer is to develop software which impresses the paying customer with its cost-effectiveness, not its ability to impress a bunch of developers with its fashionable yet overly-complex implementation. I sell my enterprise application to large corporations all over the world, so next to that monster which you and your cohorts would produce my software would appear sleek and slim.
In this post he made the following statements:
Its funny that you referenced Alan Kay when you did not even understand what Alan Kay's vision of Object and OOP are. Alan Kay's idea for Objects are actor models, which communicate with each other by sending messages. Even with the 3 basic characteristics of OOP you still fail miserably. Your singleton breaks encapsulation, your inheritance is totally wrong when you inherit 9000 lines of base class, and your polymorphism is nowhere to be found. Either way you are confused about OOP and you are at the bottom of your heart, a procedural programmer.
I asked him to provide a link to any article written by Alan Kay in which he said that OOP is about sending messages, but he could not.
I asked him to provide a link to any article which explained how a singleton could possibly break encapsulation, but he could not.
In this post he made some more questionable statements:
Yes, your singleton breaks encapsulation, it always does. When you use singleton, the object stored as singleton becomes a glorified global variable. When your client code uses singleton, it becomes a hidden dependency that cannot be tested or maintained. Yes, your inheritance is done wrong for that 9000 lines of God class, because your parent class breaks SRP and your child classes are mere dumpers for the garbage from this god parent class. And your polymorphism is indeed nowhere to be found, because you have done inheritance wrong in the first place, and blindly giving all responsibilities to all child classes ain't the right way to do polymorphism.
How so? In OOP inheritance is the primary method for sharing reusable code. In my main enterprise application there are over 400 database tables for which I have a separate class. Each class contains the business rules for a single designated database table. There is a lot of code which I could duplicate in each table class, but instead I moved all that code into an abstract table class which is then inherited by every concrete class. There is no limit to the amount of code which can be shared this way. How can this possibly be the "wrong" use of inheritance? There is NO limit on the amount of code which an abstract class may contain. In fact, if you looked at the objectives of OOP you would see that it actually encourages the creation of more reusable code, not less.
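The arrangement can be sketched like this. This is a drastically simplified, hypothetical version (the class and field names are mine, not the actual Radicore code): one abstract class holds the code common to every database table, and each concrete class adds only what is unique to its own table:

```php
<?php
// Shared, reusable logic lives in ONE abstract class and is inherited
// by every concrete table class, however many there are.

abstract class GenericTable
{
    protected $tableName;
    protected $fieldSpec = [];

    // Written once, inherited by every table class.
    public function validateInsert(array $data): array
    {
        $errors = [];
        foreach ($this->fieldSpec as $field => $spec) {
            if (!empty($spec['required']) && empty($data[$field])) {
                $errors[$field] = "$field is required";
            }
        }
        return $errors;
    }

    public function getTableName(): string { return $this->tableName; }
}

class Customer extends GenericTable
{
    public function __construct()
    {
        // Only the table-specific details live in the concrete class.
        $this->tableName = 'customer';
        $this->fieldSpec = [
            'customer_id' => ['required' => true],
            'name'        => ['required' => true],
        ];
    }
}
```

Adding a second table means adding a second small constructor, not duplicating the validation logic.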
In his article Object Composition vs. Inheritance the author Paul John Rajlich wrote the following:
Most designers overuse inheritance, resulting in large inheritance hierarchies that can become hard to deal with. Object composition is a different method of reusing functionality.
However, inheritance is still necessary. You cannot always get all the necessary functionality by assembling existing components.
The disadvantage of class inheritance is that the subclass becomes dependent on the parent class implementation. This makes it harder to reuse the subclass, especially if part of the inherited implementation is no longer desirable. ... One way around this problem is to only inherit from abstract classes.
Guess who doesn't overuse inheritance by creating large inheritance hierarchies? Me!
Guess who avoids this problem by only inheriting from abstract classes? Me!
So if I AM NOT doing what this author suggests is bad, and I AM doing what he suggests is good, how can you possibly say that I am wrong?
By having each concrete table class inherit from a single abstract table class this allows me to make extensive use of the Template Method Pattern which was described in the Gang of Four book as being "a fundamental technique for code reuse". If increasing the amount of reusable code is one of the aims of OOP, and I am using a well-known design pattern which achieves this, then how can you possibly say that I am wrong?
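As a sketch of the Template Method pattern in this context (the class and method names below are invented for the example, not taken from the framework), the invariant skeleton of an operation lives in the abstract class while individual steps can be overridden:

```php
<?php
// Template Method: the abstract class fixes the skeleton of an operation;
// subclasses customise individual steps by overriding "hook" methods.

abstract class TableTemplate
{
    // The template method - the invariant sequence, defined once.
    public function insertRecord(array $data): array
    {
        $data = $this->preInsert($data);   // customisable hook
        $data['inserted'] = true;          // invariant step (stands in for the real DB write)
        return $this->postInsert($data);   // customisable hook
    }

    // Default hook implementations do nothing; override only when needed.
    protected function preInsert(array $data): array  { return $data; }
    protected function postInsert(array $data): array { return $data; }
}

class AuditedTable extends TableTemplate
{
    protected function preInsert(array $data): array
    {
        $data['created_at'] = '2024-01-01';   // a table-specific extra step
        return $data;
    }
}
```

The skeleton is written once and reused by every subclass; a subclass changes the behaviour of a single step without touching the sequence itself.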
Then you obviously haven't looked very far. Polymorphism becomes available when the same method signature is shared by many different objects, which then allows a piece of code which calls that method to work on any object which implements that method. In my framework every one of my page controllers calls methods on each Model object which are defined in the abstract table class. As I pointed out in the previous section that abstract class is inherited by every one of my 450 table classes, which means that every one of my 40 Controllers can be used with every one of my 450 Models. How can this situation NOT be described as polymorphism?
Now do the maths - if I have 40 controllers each of which can be used on any of my 450 Model classes then that results in 40 x 450 = 18,000 opportunities for polymorphism. Do you see that - EIGHTEEN THOUSAND instances of polymorphism which you failed to spot. Are you blind, or just intellectually challenged?
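The controller-to-model arrangement described above can be reduced to a sketch like this (names hypothetical): any controller works with any model precisely because every model inherits the same method signatures from the one abstract class:

```php
<?php
// One controller, written once, works with ANY model because all models
// share the method signatures inherited from a single abstract class.

abstract class ModelBase
{
    abstract public function getData(): array;
}

class ProductModel extends ModelBase
{
    public function getData(): array { return ['type' => 'product']; }
}

class InvoiceModel extends ModelBase
{
    public function getData(): array { return ['type' => 'invoice']; }
}

class ListController
{
    // Polymorphism: the same call works on every ModelBase subclass.
    public function run(ModelBase $model): array
    {
        return $model->getData();
    }
}
```

With 40 such controllers and 450 such models, every controller/model pairing is a working combination without a single extra line of code.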
In this post he came up with the following:
Yes, your singleton breaks encapsulation, it always does. When you use singleton, the object stored as singleton becomes a glorified global variable. When your client code uses singleton, it becomes a hidden dependency that cannot be tested or maintained.
I'm sorry to burst your bubble, but the two terms are NOT connected:
Defining a class, instantiating a class and storing a single instance of a class are different and unrelated activities.
In this post he said:
With Singletons, you kill polymorphism since its impossible to swap implementation of this class.
I countered that argument with the following:
Rubbish. Take a look at the following code:

<?php
$class_name = "foo";
$object = singleton::getInstance($class_name);
$result = $object->getData();
?>

Here I can change the contents of the variable $class_name, obtain a singleton of the associated object, then call a method on that object without any problem whatsoever. I have been doing just that in my framework since 2005, so don't tell me it can't be done.
As you should be able to see, provided that the specified class contains the getData() method (which every one of my concrete table classes does by virtue of the fact that they all inherit from the same abstract table class which contains that method), this code will work.
I'm sorry to rain on your parade, but the two terms are NOT connected:
Instantiating a class into an object and calling an object method are different and unrelated activities.
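One possible implementation of such a getInstance() method (a guess at the mechanism for illustration, not the actual Radicore code) is a simple instance registry keyed by class name:

```php
<?php
// A minimal sketch of a singleton manager that hands out one shared
// instance per class name. Because the class to instantiate is chosen
// at runtime, the implementation behind the call can be swapped freely,
// so polymorphism is NOT killed.

class singleton
{
    private static $instances = [];

    public static function getInstance(string $class_name): object
    {
        if (!isset(self::$instances[$class_name])) {
            self::$instances[$class_name] = new $class_name();
        }
        return self::$instances[$class_name];
    }
}

// Two hypothetical classes sharing the same method signature.
class foo
{
    public function getData(): array { return ['source' => 'foo']; }
}

class bar
{
    public function getData(): array { return ['source' => 'bar']; }
}
```

Repeated calls with the same name return the same object, but the name itself, and therefore the implementation, can vary from call to call.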
In this post he wrote:
MVC on its own is not a complete architectural pattern, it is rather a subpattern suited to the UI layer.
I disagree, and so do the authors of all the articles I have read about this pattern. The View and Controller belong in the UI/Presentation layer, and the Model clearly belongs in the Business/Domain layer. The main reason that database access was not separated out of the Model was either because the application did not access a database (it may have been used to manipulate an image, for example), or that the thought of being able to switch from one database to another was not regarded as an option worthy of serious consideration.
MVC on its own is an example of a 2-Tier Architecture. When you combine it with the 3-Tier Architecture you get the structure shown in Figure 1. This could be called MVCD, or Model-View-Controller-DAO.
In this post he said the following:
MVC in general is an incomplete architectural pattern that only distinguishes three components in your application, but not all. In a smaller and simpler application, sure M, V and C themselves alone are sufficient. In more complex applications, you need more components, such as DAO, Service Objects, Helper Objects, etc.
I disagree. I have been writing database applications in PHP since 2003, and my application has grown from 35 tables to over 400 tables. That growth has not necessitated any new layers/tiers, just an increase in the number of objects in the existing layers:
In my application all business/domain logic is split across the numerous Model classes, with each Model containing only those business rules which apply to that entity. I do not put business rules into separate service or helper objects as that would break encapsulation.
I disagree. The definition of a God object contains the following:
A common programming technique is to separate a large problem into several smaller problems (a divide and conquer strategy) and create solutions for each of them. Once the smaller problems are solved, the big problem as a whole has been solved. Therefore a given object for a small problem need only know about itself. Likewise, there is only one set of problems an object needs to solve: its own problems.
In contrast, a program that employs a god object does not follow this approach. Most of such a program's overall functionality is coded into a single "all-knowing" object, which maintains most of the information about the entire program, and also provides most of the methods for manipulating this data. Because this object holds so much data and requires so many methods, its role in the program becomes god-like (all-knowing and all-encompassing). Instead of program objects communicating among themselves directly, the other objects within the program rely on the single god object for most of their information and interaction. Since this object is tightly coupled to (referenced by) so much of the other code, maintenance becomes more difficult than it would be in a more evenly divided programming design. Changes made to the object for the benefit of one routine can have unintended effects on other unrelated routines.
A god object is the object-oriented analogue of failing to use subroutines in procedural programming languages, or of using far too many global variables to store state information.
Note the use of the words "most of a program's overall functionality" and "single object". The class in question does not actually create a single object. It is an abstract class, and therefore cannot be instantiated into an object. It is inherited by every one of my table classes (I currently have 400, but this number increases when I add new functionality) in order to provide standard code for accessing any unspecified database table. Each concrete class inherits all the standard code from the abstract class and need only contain code which is specific to that table. Note also that all the methods in my abstract class are non-abstract, which means that they contain implementations as well as signatures, and I don't have to define them in any concrete class unless I want to override their implementations.
Saying that my abstract class is a God object simply because of its size proves nothing except that you can count, but that you either cannot read or cannot understand what you read. My abstract class does not exhibit the characteristics or symptoms of a God class as described in that article, so your accusation is without substance. It may have 9,000 lines, but that includes blank lines and comments. These are split across 254 methods, so that gives an average of about 35 lines per method. These methods can be categorised as follows:
If you don't understand how these methods are used at runtime then take a look at the diagrams in UML diagrams for the Radicore Development Infrastructure.
My abstract class may have more methods than you are used to, or more than you can possibly imagine, but that's simply because your imagination is smaller than mine, your capabilities are smaller than mine, and your applications are smaller than mine. Or, to put it another way, compared to mine your imagination is puny, your intellect is puny, and your applications are puny. I develop man-sized applications for the enterprise, you develop toys for boys. If you don't have the mental capacity to deal with a class which has that number of methods, then how can you have the mental capacity to deal with the same number of methods split across multiple classes? You are not increasing the readability of the code, you are replacing highly cohesive code with ravioli code, which makes navigation through the code for maintenance purposes more difficult.
The true definition of a god class contains the phrase "Most of such a program's overall functionality is coded into a single 'all-knowing' object". While you may think that 9,000 lines is a lot, it is just a small part of the 53,000 LOC that exist in my reusable library. This means that my abstract class contains 9/53rds or 17% of the overall functionality. I don't know who taught you maths, but 17% cannot be described as "most" in anybody's language.
One of the characteristics or symptoms of a true God class is that if the application expands then the God class expands with it. The application cannot be expanded without amending the God class. This situation simply does not exist in my framework. In the past 10 years my enterprise application has expanded from 35 database tables to over 400, and as my so-called God class does not contain any references to any tables it is totally unaffected by their number. When I add a new table to my application all I do is create a new table class which inherits from, but does not expand, that so-called God class. I then create as many user transactions as I want from my library of Transaction Patterns, each of which will reference that table class using generic methods which were defined in that so-called God class. Did you read that? Each new table class inherits from the so-called God class, it does not cause it to expand in the slightest. So if that class does not expand when the application expands it does not suffer the symptoms of a God class, therefore it does not deserve to be called one.
You may be confused by the fact that in order to implement inheritance you have to use the word "extends" in your code, as in concrete_class extends abstract_class, and this word can have different meanings depending on the context. For example, if you "extend" a house by adding on a conservatory, a new wing or a new floor, the end result is that the house itself becomes bigger. This is not what happens in OOP. The word "extends" should have been replaced with "inherits" so that what you see is that the original abstract class is completely unchanged, and that what you have actually done is create a totally new class which incorporates a copy of the abstract class, but with some additions. To continue with the house analogy, the original house is unchanged, but what you end up with is a copy of that house with the addition of a conservatory, wing, or floor.
This sort of confusion is not new in the software world. Do you remember Hungarian Notation? This was invented by a Microsoft programmer called Charles Simonyi, and was supposed to identify the kind of thing that a variable represented, such as "horizontal coordinates relative to the layout" and "horizontal coordinates relative to the window". Unfortunately he used the word type instead of kind, and this had a different meaning to those who later read his description, so they implemented it according to their understanding of what it meant instead of the author's understanding. The result was two types of Hungarian Notation - Apps Hungarian and Systems Hungarian. You can read a full description of this in Making Wrong Code Look Wrong by Joel Spolsky.
In this post Hall of Famer made the following statement:
I was saying that, there may be some child classes of your Default_Table that will never use functionality such as file uploading. It is not that the functionality 'may be' used, its never used. You said that you have controllers such as FileUploadingController, PDFController and CSVController. But then there are some child classes that do not support File Uploading, PDF/CSV conversion, then the functionalities from your Default_Table abstract class are completely useless to these child classes. This is completely opposite to what OOD is meant to be, how is it even considered an 'abstract' class if it tries to do everything? It looks more like a Utility class to me lmao. Your confusion arises from that you believe the methods are appropriate because they are used by one of the controllers, but this is not the correct criterion. Your controllers use child classes of Default_Table, not Default_Table itself. So if the methods are not used by certain child classes, it means the child classes should not inherit this method, and your God class is doing too much.
There is no rule which supports your ridiculous claim. Can you provide a URL which describes this rule? No, I thought not. If there is no such rule then I am perfectly within my rights to ignore it, and I choose to exercise that right.
You seem to be a follower of the traditional approach whereby you have one Controller for each Model, where you have to build each Model class by hand, then build - again by hand - a Controller which can access that Model. In such a method it is possible for the Model to have only those methods that are needed by that Controller, and for the Controller to actually use all of those methods.
I call this the "neanderthal approach" as it fails to meet one of the primary objectives of OOP which is to create more reusable code. There is no reusability if you have to build each Model and each Controller by hand. It also means that if you want to add extra functionality at a later date you have to modify both the Model and the Controller.
My method is far superior. I do not have to build any Models by hand as they are generated by my Data Dictionary. Each concrete table class starts off by inheriting all the standard code in the abstract table class, with a few lines in the class constructor to identify the table's name and its column details. I do not have to write any code to validate user input before it is sent to the database as that is handled automatically within the framework. I do not have to build any Controllers by hand as my framework contains a library of pre-built and reusable Controllers, with a separate controller for each Transaction Pattern. When I want to create a user transaction which does something with a particular database table I simply go to my Data Dictionary, select that table, select a Transaction Pattern then press a button. This will generate one or more component scripts and add records to the MENU database which allow those user transactions to be run immediately.
Note these differences between your neanderthal approach and mine:
My approach requires less effort, which means that it makes me far more productive. Your method is a relic of the stone age and is unfit for the 21st century.
I disagree. The Single Responsibility Principle (SRP), also known as Separation of Concerns (SoC), states that an object should have only a single responsibility, and that responsibility should be entirely encapsulated by the class. All its services should be narrowly aligned with that responsibility. But what is this thing called "responsibility" or "concern"? How do you know when a class has too many and should be split? When you start the splitting process, how do you know when to stop? In his article Test Induced Design Damage? Robert C. Martin (Uncle Bob) provides this description:
How do you separate concerns? You separate behaviors that change at different times for different reasons. Things that change together you keep together. Things that change apart you keep apart.
GUIs change at a very different rate, and for very different reasons, than business rules. Database schemas change for very different reasons, and at very different rates than business rules. Keeping these concerns (GUI, business rules, database) separate is good design.
I don't know about you, but I recognise the separation of GUI, business rules, and database access as the description of the 3-Tier Architecture, upon which my framework is based. Not only that, because I have also split my Presentation layer into two smaller components, a Controller and a View, it is also an implementation of the Model-View-Controller design pattern. This produces the structure shown in Figure 1:
Figure 1 - The MVC and 3-Tier architectures combined
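The structure in Figure 1 can be reduced to a bare-bones sketch (hypothetical classes, greatly simplified): the Controller and View form the Presentation layer, the Model is the Business layer, and the DAO is the Data Access layer, with each layer talking only to its immediate neighbour.

```php
<?php
// Data Access layer: the only place that would contain SQL.
class DAO
{
    public function select($table)
    {
        return array(array('id' => 1));   // stand-in for a real query result
    }
}

// Business layer: business rules live here, and only here.
class Model
{
    private $dao;
    public function __construct(DAO $dao) { $this->dao = $dao; }
    public function getData() { return $this->dao->select('some_table'); }
}

// Presentation layer, output half: transforms data for display.
class View
{
    public function render(array $rows) { return json_encode($rows); }
}

// Presentation layer, input half: receives the request, drives the Model,
// hands the result to the View.
class Controller
{
    public function run(Model $model, View $view)
    {
        return $view->render($model->getData());
    }
}
```

Note that the Controller never touches the DAO and the View never touches the database: each concern can change without rippling into the others, which is precisely the separation Uncle Bob describes.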
My abstract class is inherited ONLY by Model classes, and contains absolutely NO logic which rightly belongs in a Controller, View or DAO, so for you to say that it does too much and therefore breaks SRP cannot be supported by the facts. It does a lot, and perhaps it does more than you are used to, but that is no reason to assume that it breaks SRP. Besides, if I were to break that abstract class into smaller units I would be breaking encapsulation and the concept of cohesion, and those would be mistakes of far greater magnitude.
You should also note that SRP is all about the separation of logic, not the separation of information. Logic is code while information is data. Taking the data for an entity and splitting it across several classes would violate encapsulation and would therefore be wrong. You should also note that you should not split the logic across several objects unless you can give that logic a sensible (and short) name and fit it into a structure diagram as shown in Figure 1. Such terms as "Service" and "Helper" would be meaningless. If your method produces a structure that has so many objects that you cannot fit it into a single page diagram, or you cannot give each object a meaningful name, then you have probably gone too far. Splitting a monolithic piece of code into smaller parts is a good idea, but you should have the intelligence to know what to separate out and when to stop separating. The idea that you should extract until you drop I consider to be too ridiculous for words.
There follows a long exchange of argument and counter-argument which proves nothing except for the fact that we disagree on almost everything. He then adds to his list of stupid arguments in this post in which he said the following:
I already looked at your application architecture, and although it breaks down into 3 tiers, each tier of your application still contains a handful of god classes that do too much.
Excuse me, it is simply not possible for an application to have more than one God class. The definition of a God class clearly states "Most of such a program's overall functionality is coded into a single 'all-knowing' object". The word "single" should tell you that there cannot be more than one. The word "most", which equates to "more than 50%", should tell you that there cannot be multiple objects each containing more than 50% of the code. It is simply not mathematically possible to have multiple god objects. I don't care who you are, you are not allowed to redefine words, especially simple words such as "single" and "most", in order to promote your personal agenda.
How can I possibly say such a thing? The principle of encapsulation can be described as follows:
The act of placing an entity's data and the operations that perform on that data in the same class. The class then becomes the 'capsule' or container for the data and operations.
Note that this requires ALL the properties and ALL the methods to be placed in the SAME class. Breaking a single class into smaller classes so that the count of methods in any one class does not exceed an arbitrary number is therefore a bad idea as it violates encapsulation and makes the system harder to read and understand. It would also decrease cohesion and increase coupling which would be the exact opposite of what should be achieved.
As you should be able to see, if I moved some methods to another class then I would be ignoring the ALL part of that description, and as far as I am concerned that cannot be justified under ANY circumstances.
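As a toy example of that description (a hypothetical class, not framework code), the entity's data and ALL the operations which act on that data are placed in the SAME class, which becomes the 'capsule' for both:

```php
<?php
// The data is hidden inside the capsule; the only way to touch it is
// through the operations defined in the same class.
class Customer
{
    private $data = array();                // the entity's data

    public function setName($name)  { $this->data['name'] = $name; }
    public function getName()       { return $this->data['name']; }
    public function isComplete()    { return !empty($this->data['name']); }
}
```

Moving `isComplete()` into some separate "CustomerValidator" class would mean the data lives in one place and an operation on that data lives in another, which is exactly the violation being described.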
There are some numpties out there who create artificial rules such as "a class should not have more than N methods" and "a method should not have more than N lines of code" where N is a totally arbitrary number (usually 10, but I have seen lower). If I were to follow this "advice" and split the 138 public methods across 13 different classes then what would be the result?
This situation would therefore be adding to the maintenance burden instead of alleviating it, so should not be recommended by any sane person at all.
SRP makes no mention of size, so I regard any such limitation as artificial and feel no obligation to be bound by it.
How can I possibly say such a thing? The principle of cohesion can be described as follows:
Cohesion is a measure of the strength of association of the elements inside a module. A highly cohesive module is a collection of statements and data items that should be treated as a whole because they are so closely related. Any attempt to divide them would only result in increased coupling and decreased readability.
Simply put, this states that functions/methods which are related should be contained within the same class, and that a class should only contain functions/methods which are related. Putting all functions into a single class would be just as wrong as putting each function into a separate class. The correct grouping of functions requires a modicum of intelligence, and all evidence points to the fact that this is missing in the greater programming community. Only the intelligent few seem to have this ability, while the remainder are nothing more than Cargo Cult programmers who are going through the motions and assuming that what they are doing actually works as intended because they are incorporating all the right buzzwords and jumping on the same bandwagon as all the other lemmings.
As you should be able to see, if I moved some methods to another class then I would be decreasing cohesion and increasing coupling, and as far as I am concerned that cannot be justified under ANY circumstances.
SRP makes no mention of size, so I regard any such limitation as artificial and feel no obligation to be bound by it.
In this post he said the following:
And nope, you don't understand SOLID principles. You don't even understand SRP, or maybe you understand SRP but breaks it anyway. All the principles exist for good reasons, because it makes development easier, faster and more cost-effective.
This is where I have to strongly disagree. I have read many articles which supposedly show how a particular principle can be used, but all I see is the extra code that needs to be written for no apparent benefit. When I say "no apparent benefit" I mean that the code still does what it did before the principle was applied, but with more code and more levels of indirection. More code means longer to write, longer to run, longer to read and therefore more difficult to understand and maintain. When you can show me a principle which results in less code being written I will sit up and take notice, but until then I shall treat it as a waste of time.
In my long career in software development I have come across many ideas put forward by many people as being the silver bullet that will solve all the current problems, but the biggest problem with these ideas is that they are usually badly written, are too vague and imprecise, and are open to interpretation and therefore mis-interpretation. The levels of mis-interpretation can vary between "extreme" and "perverse". The first big problem is therefore which interpretation do you choose? Is it the "moderate" or "extreme" version? When do you apply the idea? When do you stop applying the idea? This leads me to the following observation:
There are two ways in which an idea can be implemented - intelligently or indiscriminately.
Those who apply an idea or principle indiscriminately, who apply it in inappropriate circumstances, or who don't know when to stop applying it, are announcing to the world that they do not have the brain power to make an informed decision. They simply do it without thinking as they assume that someone else, namely the person who invented that principle, has already done all the necessary thinking for them. This leads to a legion of Cargo Cult programmers, copycats and buzzword programmers who are incapable of generating an original thought.
This is a great mistake, and one that I learned NOT to make many years ago. I will only implement those ideas that have proven, in my personal experience, to be beneficial or have merit. This has led me to ignore a large number of ideas which have been approved by the Paradigm Police/OO Taliban and to follow instead an older, more mature set of ideas which, due to their unfashionable nature, have been described as "unorthodox" at best or "heretical" at worst.
When it comes to implementing a solution, and there is a choice between two alternatives - one simple, one complex - I will always follow the KISS principle and stick with the simpler solution. This follows on from the following statement made by C.A.R. Hoare:
There are two ways of constructing a software design. One way is to make it so simple that there are obviously no deficiencies. And the other way is to make it so complicated that there are no obvious deficiencies.
I have expanded this statement into the following:
Any idiot can write complex code that only a genius can understand. A true genius can write simple code that any idiot can understand.
The mark of genius is to achieve complex things in a simple manner, not to achieve simple things in a complex manner.
Despite the fact that I have been using my heretical approach for over ten years there are people out there who dismiss it out of hand as being rubbish, not because they have examined it and compared it with their own implementation, but because they have been told that I do not follow the "right" OO standards therefore everything I do must surely be wrong. Being wrong it must be bad, and being bad it must exhibit the characteristics of bad software such as being unreadable, difficult to maintain, take longer to write, longer to debug, et cetera, et cetera. This is a premise which leads to a conclusion. I shall now demonstrate that the conclusion is totally wrong which therefore proves that the original premise must also be wrong.
The fundamental point of my technique is that it enables me to create basic components to maintain the contents of database tables WITHOUT WRITING ANY CODE WHATSOEVER. This is in direct contrast to the volumes of code that need to be written just to comply with those academic yet impractical principles that are so loved by the Paradigm Police and the OO Taliban.
Throughout the exchange of views between myself and Hall of Famer in What's Wrong With Object-Oriented Programming? and also MVC vs. OOP he has continuously proved that he cannot understand simple concepts and keeps on insisting that any opinion which is different from his is automatically wrong. He is operating under the belief that if he keeps repeating the same lies over and over again they will eventually be perceived as the truth. By constantly casting aspersions on my competency as a programmer, the quality of my work, and even the competency of my customers, he is guilty of gaslighting, but his words will never convince me that I am wrong for the simple reason that my code works, and any engineer, software or otherwise, will tell you that something that works cannot be wrong. Not only do my methods work, they actually produce results which are more cost-effective than his simply because I can produce working components at a much faster rate than he can, and everyone knows that faster means cheaper. My framework also provides many features which are automatically available to every component "out of the box" without the need for additional coding.
An example of his fuzzy thinking can be found in this post in which he states:
If you are a more intelligent programmer, you'd find that interfaces drastically improve your efficiency
Excuse me, but how can taking the time to write code that contributes absolutely nothing to the application be called "efficient"? If you look at Object Interfaces you will see that in PHP they are an optional extra and have absolutely no effect on how the code is run. To my mind the writing of code which does nothing is not "efficient", it is a total waste of time and should be avoided at all costs.
In the very next sentence he said this:
And nope, my definition of efficiency is correct, by efficiency I mean the ability to write more code with higher quality.
I'm sorry, but your definition is a crock of sh*t. The following are correct definitions:
- Efficiency is the (often measurable) ability to avoid wasting materials, energy, efforts, money, and time in doing something or in producing a desired result. In a more general sense, it is the ability to do things well, successfully, and without waste.
- Efficiency is a measurable concept, quantitatively determined by the ratio of useful output to total input.
- effective operation as measured by a comparison of production with cost (as in energy, time, and money)
- the ratio of the useful energy delivered by a dynamic system to the energy supplied to it
- the good use of time and energy in a way that does not waste any
- the difference between the amount of energy that is put into a machine in the form of fuel, effort, etc. and the amount that comes out of it in the form of movement
Cambridge English Dictionary
The comparison of what is actually produced or performed with what can be achieved with the same consumption of resources (money, time, labor, etc.). It is an important factor in determination of productivity.
The Business Dictionary
- the state or quality of being efficient, or able to accomplish something with the least waste of time and effort; competency in performance
- accomplishment of or ability to accomplish a job with a minimum expenditure of time and effort
- the ratio of the work done or energy developed by a machine, engine, etc., to the energy supplied to it, usually expressed as a percentage
As you can see from above the term "efficiency" has nothing to do with writing more code with higher quality. "More" is not a factor. "Quality" is not a factor. "Less" is a factor. "Effort" is a factor. It is the ability to achieve a particular result with the minimum of effort and the minimum of waste. As an example, if it takes me 5 minutes to create the family of forms shown in Figure 2, but it takes you 5 hours, then clearly my method requires less effort and less time, which automatically makes it more efficient and more productive. If you think that my estimate of 5 hours is inaccurate then take my challenge and provide me with an actual figure.
In this post he said:
Its time to stop labeling yourself as productive. You've been developing your legacy Radicore for longer than a decade, and no significant improvement has been made thus far. You call that productive?
You are not productive, your definition of productivity is flawed because you are just writing more code with longer time, but you aint writing more code in a given unit of time.
I'm sorry, but your definition is a crock of sh*t. The following are correct definitions:
Productivity describes various measures of the efficiency of production. A productivity measure is expressed as the ratio of output to inputs used in a production process, i.e. output per unit of input.
The effectiveness of productive effort, especially in industry, as measured in terms of the rate of output per unit of input.
Oxford English Dictionary
the rate at which goods are produced or work is completed
A measure of the efficiency of a person, machine, factory, system, etc., in converting inputs into useful outputs.
The Business Dictionary
As you can see from above the term "productivity" has nothing to do with improving anything. "Improvement" is not a factor. "Output" is a factor. "Unit of Input" is a factor. It is the ratio of output to each unit of input. It is the ability to produce more with the same resources. As an example, if it takes you 5 hours to create the family of forms shown in Figure 2, but my method allows me to do it in 5 minutes, then clearly in the same amount of time I can produce more, which automatically makes my method more efficient and more productive. If you think that my estimate of 5 hours is inaccurate then take my challenge and provide me with an actual figure.
Productivity is not about writing more code in a given unit of time, it is about producing a result by writing less code, and the less code you have to write the less time it takes. This can be achieved by reusing code that has already been written. My framework has more reusable code than yours, which means that I have less code to write in order to achieve a result. Because I have less code to write I can achieve the result in less time, and this makes me more productive and more efficient.
My framework is not an application in its own right, it is a toolkit for building database applications. It allows me to very quickly build components which perform various actions on database tables. I have used this framework to build an ERP application which currently has 400 database tables and 2,500 tasks (use cases), and due to the amount of code which I did not have to write I did this at a faster rate than you could possibly achieve with your framework. I can produce the same result with less effort, which means that with the same effort I can produce more. Productivity is directly related to efficiency, it has nothing to do with improvement.
Whenever a module calls another module the two modules are said to be coupled. This coupling can be either tight or loose. Loose coupling is better.
Tightly coupled systems tend to exhibit the following developmental characteristics, which are often seen as disadvantages:
- A change in one module usually forces a ripple effect of changes in other modules.
- Assembly of modules might require more effort and/or time due to the increased inter-module dependency.
- A particular module might be harder to reuse and/or test because dependent modules must be included.
The degree of loose coupling can be measured by noting the number of changes in data elements that could occur in the sending or receiving systems and determining if the computers would still continue communicating correctly. These changes include items such as:
- Adding new data elements to messages
- Changing the order of data elements
- Changing the names of data elements
- Changing the structures of data elements
- Omitting data elements
Polymorphism - the ability to substitute one class for another. This requires multiple classes to support the same method signature.
Singleton - restricts the instantiation of a class to one object. Multiple requests for an object of the same class will return the same object.
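One possible implementation of that behaviour (simplified, hypothetical code, not the framework's own class) is a registry which hands out one shared instance per class name:

```php
<?php
// Repeated requests for an object of the same class return the SAME
// shared instance; the first request creates it, later requests reuse it.
class singleton
{
    private static $instances = array();

    public static function getInstance($class_name)
    {
        if (!isset(self::$instances[$class_name])) {
            self::$instances[$class_name] = new $class_name();
        }
        return self::$instances[$class_name];
    }
}

class foo
{
    public function getData() { return array('x' => 1); }
}
```

Because `$class_name` is an ordinary variable, any class can be obtained through this one method, a point which becomes relevant in the discussion of singletons and polymorphism below.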
In this post he wrote:
Loose coupling has a lot to do with the ability to swap implementations
I disagree. The ability to swap implementations is called polymorphism which allows you to swap one object for another. Coupling is about the method signature which is used when one object calls another. Loose coupling helps to avoid the ripple effect when you need to make a change in the data elements which are passed from one object to another. Loose coupling and polymorphism are NOT related as it is possible to have one without the other.
In my framework I can add or remove a column from a database table and I NEVER have to change ANY method signatures in ANY of my Models, Views, Controllers or DAOs. I do not have to change ANY properties, and I do not have to change ANY getters or setters. Compare this example of tight coupling with this example of loose coupling and you will see the difference.
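A sketch of how that works (simplified, hypothetical code): when all data travels in a single associative array, adding or removing a database column never changes any method signature.

```php
<?php
// The entire row is passed as ONE argument, so the signature is immune
// to changes in the table's structure.
class Person
{
    public function insertRecord(array $fieldarray)
    {
        // ... validate $fieldarray and insert it into the database ...
        return $fieldarray;
    }
}

$person = new Person();

// Today's columns:
$person->insertRecord(array('id' => 1, 'name' => 'Smith'));

// A new 'email' column is added later - the signature is untouched:
$person->insertRecord(array('id' => 2, 'name' => 'Jones', 'email' => 'jones@example.com'));
```

Contrast this with a tightly coupled signature such as `insertRecord($id, $name)`, where adding the email column would force a change in the method and in every one of its callers - the ripple effect described above.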
In this post he wrote:
When you use Singleton, your class is tightly coupled to the concrete singleton class, it removes possibility for polymorphism and makes it impossible for you to swap by a different implementation for purposes such as testing. DI is not loose-coupling itself, but it does offer a solution to your tight coupling problem
I disagree. A singleton is about obtaining a shared instance of an object and has nothing to do with method signatures. Loose coupling is about changing the data elements which are passed between objects without having to change any method signatures.
It is a fallacy that Dependency Injection (DI) automatically provides loose coupling. DI is all about where the dependent object is identified/instantiated, it has nothing to do with the construction or use of any method signatures.
In this post he wrote:
Polymorphism indeed has something to do with coupling, because loosely coupled application usually has a lot of polymorphism. With Singletons, you kill polymorphism since its impossible to swap implementation of this class.
I disagree. Polymorphism indicates coupling, but is not an indicator of loose coupling. It is possible to have polymorphism without loose coupling, and it is possible to have loose coupling without polymorphism.
In this post he wrote:
How can you have more polymorphism when you have so much tight coupling with Singleton
Firstly, the idea that you cannot have polymorphism with a singleton is wrong. Take the following code as an example:
<?php
$class_name = "foo";
$object = singleton::getInstance($class_name);
$result = $object->getData();
?>
Here I can change the contents of the variable $class_name, obtain a singleton of the associated object, then call a method on that object without any problem whatsoever. I have been doing just that in my framework since 2005, so don't tell me it can't be done.
Secondly, I only use singletons when I have a dependency in my Model classes such as the validation object or the data access object. I do not require any polymorphism here as there is only ever a single implementation. Where I do make use of DI is when I inject a Model into a Controller or when I inject a Model into a View.
He is also very good at inventing crazy new rules just to reinforce his pathetic arguments. In this post he said the following:
I told you already, an abstract class should provide a bare minimum of functionality that all child classes find useful. If your abstract class contains methods that only 1-2 child classes actually need, then you are doing it very wrong. You clearly dont even understand why it is called 'abstract' in the first place, your implementation of a 'know-all' Default_Table class is exactly the opposite of how abstract classes should look like.
I have never seen the definition of an abstract class which says that it should provide only the bare minimum of functionality and that every method that it contains MUST be used by every one of its child classes. Inherited, yes. Available, yes. Used, no. In my framework the abstract table class exists in the Model/Business layer and is inherited by every one of my concrete table classes, of which there are currently 400. The abstract class provides all the methods (which may be either abstract or non-abstract) which are called by any of my 50 reusable Controllers, where there is a separate controller for each of my Transaction Patterns. This means that these methods do not have to be implemented by any developer as the implementations are inherited from a single shared source. These methods provide the basic and common functionality, but this standard behaviour can be enhanced or overridden in any table class by placing code in the relevant customisable methods. This arrangement means that once I have created a concrete table class I can generate a new task from any transaction pattern, and that task will be able to run immediately with default behaviour without the need for any coding by any developer. If the default behaviour needs to be changed then the developer puts the relevant code into the relevant customisable method. The method in the child class then overrides the method in the abstract class.
Note that my abstract class does not contain any abstract methods, so none of these methods needs to be defined with an implementation in the inheriting class. The only time that a method in the abstract class needs to be implemented in an inheriting class is when that implementation needs to be modified. It is therefore incorrect to say that any of my concrete classes contains methods that it does not actually use. When the class is instantiated into an object, that object may contain inherited methods that are not used, but so what? What problem does this cause? If it does not cause a problem then why does it need a solution? I certainly will not be implementing your solution as it would actually create problems where none existed previously.
It is true to say that not every method needs to be used by every controller, but there is no rule that says that it should. It is common practice and therefore perfectly acceptable for different controllers to access the same object using different subsets of the methods which are available. Where I have 50 different controllers which implement my 50 different Transaction Patterns, there is no rule which says that I MUST implement every one of those 50 patterns for every child class. For example, I have some methods which are only used by the File Upload, Output to CSV or Output to PDF patterns, but each of these patterns is only implemented for a small number of classes. The methods used by those patterns are available for use in every child class, but they may not actually be used simply because that pattern is not (yet) implemented with that child class. So what? Does this cause a problem? Does this cause unexpected behaviour? Does this require any extra effort from the developer to deal with the fallout? If the answer to all these questions is "No", then what exactly is the issue? What is wrong? If the only thing that happens is that it makes you unhappy then my only comment is the Anglo-Saxon equivalent of "Go Forth and Multiply!"
In this post he said:
If some child classes have no use for methods such as file uploading, then the data and behaviors associated with file uploading should belong somewhere else. It is irrelevant, because from the aspect of your child classes that do not support file uploading, these fields/methods are useless.
from the subclass of Default_Table point of view, it may or may not need functionality for file uploading and pdf/csv handling. In this case, those child classes without needs for such functionality should not have those redundant logic at all, but by inheriting your God class they have to receive those fields/methods that shouldnt belong to them. You are breaking encapsulation and achieving low cohesion by putting unrelated fields/methods into the same class.
Excuse me, but having an inherited method in a child class which is not actually used does not break encapsulation or lower cohesion. If there is a method in the abstract class which is not used then it is not defined in the concrete class.
In this post he wrote:
So if the methods are not used by certain child classes, it means the child classes should not inherit this method
Excuse me, but when you inherit from an abstract class you automatically inherit ALL the properties and methods from that abstract class. There is no mechanism in any OO language which allows you to pick and choose subsets to inherit, so it is unacceptable for you to invent a rule for which no mechanism exists.
In this post he started to contradict himself:
I already explained again and again to you, that an abstract class is meant to provide functionality needed by all or most child classes, not functionality that only one or two child classes will find useful.
By changing his wording from "all" to "all or most" he has opened the door to admitting that not every method in the abstract class need actually be used in every child class. The words "not all" need not be interpreted as "most" as either "some" or "at least one" are perfectly valid. Provided that a method in a child class is used by at least one pattern/controller, then that method has a perfect right to exist in the abstract class so that it is available should that pattern ever be implemented within a child class.
In this post he changed his definition yet again:
The definition of 'abstract class' doesn't say all of its methods should be useful by every child class, but the definitions of cohesion and encapsulation say this. You have no way to achieve high cohesion and proper encapsulation if your child classes inherit methods that don't belong to them at all
The fact that some out of 400 child classes inherit methods which are not actually used is irrelevant. The abstract table class is filled with methods which may be used by any of the concrete table classes depending on which Transaction Patterns are used with that concrete class. There is no way to "uninherit" any unused methods, nor to inherit only that subset of the available methods that you actually need. An unused method does not cause a problem in the real world as it exists only in the abstract class and is never overridden in the concrete class, so if there is no problem then why do you keep insisting that there is?
Perhaps his confusion over my use of an abstract class lies in the fact that he believes that every method in an abstract class must automatically be an abstract method, in which case every one of these methods would need to be copied into every subclass. Being forced to define a method in a subclass which was not actually used would therefore NOT be a good idea. However, that is not how abstract classes work. This is what the PHP manual says:
PHP 5 introduces abstract classes and methods. Classes defined as abstract may not be instantiated, and any class that contains at least one abstract method must also be abstract. Methods defined as abstract simply declare the method's signature - they cannot define the implementation.
Note here that it says that methods within an abstract class may be defined as abstract methods, but they need not be, in which case they can provide an implementation. This means that if you inherit from an abstract class, and that abstract class contains any abstract methods, then you MUST define each of those methods in your concrete class so that you can define the implementation for each of those methods. On the other hand, if the abstract class contains any non-abstract methods, then none of those methods need be defined in any concrete class at all. The only time that you need to copy the signature for a non-abstract method from the parent class into your concrete class is when you want to override that method's implementation. If you do not override it, then the original implementation in the parent class will be used by default. If you do override a method then the parent implementation will not be executed unless you explicitly do so with a line of code which uses the Scope Resolution Operator.
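A short illustration of those rules (hypothetical class names): an abstract method MUST be implemented in the concrete class, a non-abstract method need not be, and an overriding method can still reach the parent's implementation via the Scope Resolution Operator.

```php
<?php
abstract class ParentClass
{
    abstract public function mustImplement();   // signature only, no body allowed

    public function standardBehaviour()         // implementation is inherited
    {
        return 'default';
    }
}

class ChildClass extends ParentClass
{
    public function mustImplement()             // obliged to supply a body
    {
        return 'implemented';
    }

    public function standardBehaviour()         // optional override which still
    {                                           // reuses the parent's version
        return parent::standardBehaviour() . '+extra';
    }
}
```

A second child class which defines only `mustImplement()` would get the parent's `standardBehaviour()` unchanged, which is exactly the "default behaviour unless overridden" arrangement described above.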
The Gang of Four book on Design Patterns has this to say about abstract classes when describing a method of using class inheritance which avoids the side-effects which arise from its overuse:
One cure for this [problematic implementation dependencies] is to inherit only from abstract classes, since they usually provide little or no implementation.
The phrase "since they usually provide little or no implementation" tells me that most programmers are failing to realise the full potential of abstract classes by having little or no code which can be reused. This totally negates the benefit of using OOP in the first place, which is supposed to increase the amount of reusable code and thereby reduce maintenance. This tells me that most programmers are failing to identify the correct abstractions in their software. I design and build nothing but enterprise applications which deal with large volumes of data concerning numerous different entities, where this data is stored in tables in a relational database, one table per entity. The application therefore does NOT interface with objects in the real world, it interfaces with tables in a database which contain information on those objects. I am constantly being told that I am not practicing OOD correctly, but when I ask the "is-a" and "has-a" questions which are supposed to be the backbone of OOD I come up with the following answers:
This then leads me to the following observations which are blindingly obvious to me, but which almost everybody else seems to miss:
I am constantly being told that "proper" OO developers DO NOT have a separate class for each database table, but their logic is so full of holes I wonder how they can possibly write software which is cost effective or even workable. If you look at Having a separate class for each database table is not good OO you will see the following statement made by a so-called OO guru:
Classes are supposed to represent abstract concepts. The concept of a table is abstract. A given SQL table is not, it's an object in the world.
If the author of that statement actually understood what he wrote he would see the following:
Template methods are a fundamental technique for code reuse. They are particularly important in class libraries because they are the means for factoring out common behaviour.
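The "template method" technique quoted above can be sketched in PHP as follows. The class names are invented for illustration; the point is that the invariant logic lives in the abstract class and is reused by every subclass:

```php
<?php

abstract class Report
{
    // Template method: the invariant steps live here in the abstract
    // class, so every subclass reuses this logic for free.
    public function run(): string
    {
        return $this->header() . $this->body() . $this->footer();
    }

    // "Hook" methods with default implementations that MAY be overridden.
    protected function header(): string { return "=== REPORT ===\n"; }
    protected function footer(): string { return "=== END ===\n"; }

    // The variable step which each subclass MUST supply.
    abstract protected function body(): string;
}

class SalesReport extends Report
{
    protected function body(): string
    {
        return "Total sales: 42\n";
    }
}

echo (new SalesReport)->run();
```

The more common behaviour that can be factored out into such methods, the more reusable code the abstract class provides.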
I firmly believe that I am using the features of OO in the way that they were designed to be used, with fewer side-effects, and producing large amounts of reusable code, so all my critics are barking up the wrong tree.
In any engineering discipline there are laws that must be followed otherwise your project will fail. There are other things called "rules" or "guidelines" which have no effect on the success or failure of the project, but which are added to please the bureaucrats so that they can tick boxes on pieces of paper. For example, when building an aeroplane you must obey the laws of aerodynamics otherwise your plane will not get off the ground. The fact that a bureaucrat has a piece of paper on which all the boxes have been ticked will be irrelevant. For example, there may be rules which state how an aircraft should be built, what materials should be used, what shape it should be, et cetera, but these are far less important than the laws of aerodynamics. You may build a plane that ticks all the bureaucrat's boxes, but if it doesn't fly it is a failure.
If an innovator comes along with a different approach which allows aircraft to be built faster and cheaper by utilising different techniques, different materials and/or different shapes, the only important factor in the eyes of the paying customers is "Does it fly?" If it happens to be cheaper than his competitors then the innovator will take business away from them. In such circumstances it would be pretty pointless for the competitors to complain that the innovator is breaking the rules. The definition of "innovate" is
Make changes in something established, especially by introducing new methods, ideas, or products.
Innovation requires change, and this sometimes means throwing out the current rule book and starting again with a new set of rules.
If I break an engineering law and something bad happens, then the fault is mine. My plane won't fly, my boat won't float, my bridge will fall down, my software won't work as expected. If I break a bureaucratic rule and nothing bad happens, then that rule has no practical purpose and its existence can be questioned. On the other hand the breaking of a bureaucratic rule may result in something pleasant, such as better performance, quicker delivery or lower costs. The fact that a bureaucrat is upset because he can no longer tick that box on his sheet of paper becomes irrelevant.
There are very few "laws" in the world of software engineering. Personally I can see only three:
When writing database applications for the web I am constrained by the following:
When it comes to programming style there is no "one size fits all" approach such as that being demanded by Hall of Famer. The only "rule" that I have followed in all the languages that I have used is based on the following statement from Abelson and Sussman in their book Structure and Interpretation of Computer Programs which was first published in 1985:
Programs must be written for people to read, and only incidentally for machines to execute.
This requires the use of meaningful data names, meaningful procedure names, a structure that can be shown in a diagram, and logic that can be easily understood. Along the way I have adopted other principles such as KISS, DRY and YAGNI, but anything else I see as a passing fancy and not a universal law that must be obeyed without question. In order to write effective software I have to obey the laws of software engineering, but standards and best practices are not laws, they are guidelines. As a pragmatic programmer I will only follow those guidelines which have proved their worth in my 30+ years of programming experience. As Petri Kainulainen says in his article We Should Not Make (or Enforce) Decisions We Cannot Justify I have the right to question any rule in order to evaluate its actual worth. Any guideline which stands in my way will be brushed aside, and provided that my paying customers are impressed with the result I couldn't care less about upsetting the delicate sensibilities of some petty bureaucrat.
Fundamental - forming the base, from which everything else develops
Evolve - to gradually change over time
In this post I made the following statement:
OOP consists of nothing more than encapsulation, inheritance and polymorphism, and everything else is an optional extra.
He replied with the following
those are NOT add-on definitions of OOP, they are fundamental and universally agreed concepts.
I pointed out to him that the fundamental or original definition of OO was made by Alan Kay, who invented the term, when he stated that OO consists of nothing more than encapsulation, inheritance and polymorphism, so anything added after that is an optional extra. There are now many different languages which implement OO theory in different ways, into which different people have added their own ideas of how OO could be "improved", but all these ideas were later additions and not included in the original definition of OO.
Logic - program code which performs a function
Information - data which is processed by code
In this post he made this statement:
When a class has 9000 lines, it is guaranteed to have more than one responsibility
I tried to explain that my framework had been separated into an adequate number of components by virtue of the fact that it implemented a combination of both the 3-Tier Architecture and Model-View-Controller design pattern. As my so-called "God" class contains logic which is restricted to operations performed in or by the Model, and contains no logic which rightly belongs in a Controller, View or DAO, it does not break SRP at all. I asked him to point out any logic in my abstract class which should be in one of the other components, and in this post he answered with:
I stopped reading once I found file uploading logic/responsibility in that class
He made the same accusation in this post, to which I replied with:
No table class performs any file uploading as that is done within the Controller. It merely has methods which supply the destination directory and file name, plus a post-upload method which allows for additional business rules to be specified after the upload has been completed.
In this post he continued his argument with the following:
Even though the actual file uploading is done in your controller class, your god class does contain File uploading logic. Its not just a courier of information, it actually processes this information.
Hall of Famer clearly does not understand the difference between logic and information. Having a method in a Model which passes information/data back to the Controller, which then performs the file upload, is not the same as having logic/code within the Model that performs the file upload. The Model passes the data to the Controller, and it is only the Controller which contains the logic/code which processes this data.
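The distinction can be sketched as follows. This is a hypothetical simplification, not actual RADICORE code: the Model supplies nothing but information (a directory and a file name, plus a post-upload hook), while all the uploading logic lives in the Controller:

```php
<?php

class ProductModel
{
    // Supplies INFORMATION only: where an uploaded file should go.
    public function getUploadDirectory(): string
    {
        return '/var/uploads/products';
    }

    public function getUploadFileName(string $original): string
    {
        return date('Ymd') . '_' . basename($original);
    }

    // Optional post-upload hook for additional business rules.
    public function postUpload(string $path): void
    {
        // e.g. record the new file name in the database
    }
}

class UploadController
{
    // Contains the LOGIC: it is the Controller that moves the file.
    public function handle(ProductModel $model, array $file): string
    {
        $target = $model->getUploadDirectory() . '/'
                . $model->getUploadFileName($file['name']);
        // move_uploaded_file() only succeeds during a real HTTP upload;
        // shown here to mark where the processing actually happens.
        move_uploaded_file($file['tmp_name'], $target);
        $model->postUpload($target);
        return $target;
    }
}
```

Remove `ProductModel` from this picture and the upload code still exists; remove `UploadController` and no upload can happen, which shows where the responsibility actually lies.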
Hall of Famer seems to have the notion that just because a feature exists in the language then it should be used, or that if a programming principle has been created by a supposed "mastermind" then it should be followed. Intelligent people know different. Unlike a mountain climber whose justification for climbing a particular mountain is "Because it is there!", in computer programming you only use a particular feature of the language, a particular function, or particular syntax when it helps you achieve the objectives of the program or library which you are writing. I have used many languages in my long career, and I have rarely used more than 50% of the features in any of those languages. Why not? Because I had no use for them. Just because they are useless to me does not mean that they are also useless to everybody else.
Although I could use a new feature that has been added to the language, that is not the same as saying that I should. Sometimes a feature is added to provide coverage for a topic that was not covered previously, but sometimes it is added just to provide a different way of doing something that can already be done. Each time I see a new feature in the language I ask myself some simple questions:
If I don't have the problem that a feature was meant to solve then I have no use for that feature. This includes namespaces.
If the cost of changing existing code to do the same thing, but in a different way, is greater than the benefits that would be provided, then use of that feature cannot be justified. I would put autoloaders and short array syntax in this category.
If a feature was designed to solve a problem in the language that no longer exists, such as object interfaces, then that feature is dead and should be removed. Object interfaces did not exist in PHP 4 as they were not necessary. They were only added in PHP 5 because a vociferous (i.e. loudmouthed) group of developers used the pathetic argument that "interfaces exist in other OO languages, so they should be in PHP as well".
Structured programming is a programming paradigm aimed at improving the clarity, quality, and development time of a computer program by making extensive use of subroutines, block structures, for and while loops - in contrast to using simple tests and jumps such as the go to statement which could lead to "spaghetti code" causing difficulty to both follow and maintain.
Monolithic System - A software system is called "monolithic" if it has a monolithic architecture, in which functionally distinguishable aspects (for example data input and output, data processing, error handling, and the user interface) are all interwoven, rather than containing architecturally separate components.
Monolithic Application - describes a single-tiered software application in which the user interface and data access code are combined into a single program from a single platform. It is designed without modularity. Modularity is desirable, in general, as it supports reuse of parts of the application logic and also facilitates maintenance by allowing repair or replacement of parts of the application without requiring wholesale replacement.
Spaghetti code is a pejorative phrase for source code that has a complex and tangled control structure, especially one using many GOTO statements, exceptions, threads, or other "unstructured" branching constructs. It is named such because program flow is conceptually like a bowl of spaghetti, i.e. twisted and tangled. Spaghetti code can be caused by several factors, such as continuous modifications by several people with different programming styles over a long life cycle. Structured programming greatly decreases the incidence of spaghetti code.
In this post he made the following statement:
Yes it is possible to write well-structured Java and COBOL code, I never said you cannot. However its impossible to write well-structured procedural code. Procedural code can be structured, but only badly structured. Writing procedural Java code, then its badly structured. Writing COBOL code with OO design, then its well structured.
You should see here that he is contradicting himself:
As COBOL is a procedural language then one of those statements must be a lie.
In this post he wrote this:
Your code is properly layered, but improperly structured
He has said more than once that my code is unstructured and spaghetti-like, which means that he does not understand what those terms actually mean. When you compare my code against the correct definitions (see above) you should see the following:
This should prove that my code is properly layered, therefore it is modular and not monolithic. It also uses structured programming techniques, therefore cannot be called spaghetti-like.
There is one acid test to see if an application is well-structured or not - produce a structure diagram, preferably on a single page. If you cannot produce a structure diagram similar to what I have produced for the RADICORE framework, then where is the proof that YOUR application/framework is properly structured?
The dictionary definition of "extreme" contains the following descriptions:
The dictionary definition of "moderate" contains the following descriptions:
Despite me telling him over and over again that I have followed a reasonable and moderate interpretation of SRP by implementing a combination of the 3-Tier Architecture and Model-View-Controller design patterns, which, like Robert C Martin's description of SRP, identify only three areas of logic which should be separated, he continued his argument in this post in which he said:
you fail to understand that SRP applies to not only the tiers/layers, but also to the subcomponents inside each tier/layer.
The descriptions of the 3-Tier Architecture, MVC and SRP only identify three areas of logic each. There is absolutely no mention of any need to go any further with the process of separation.
He followed up with this post:
There may be only 3-4 tier/layer, but tier/layer = responsibility only in extremely simple applications. So you see? My interpretation is not extreme, your interpretation is. In fact, when you have MVC structure, this simple assumption already fails, since both controller and view belong to UI layer and they are two different responsibilities. In this case, the UI logic already contains two responsibilities. And the fact that Robert C Martin didn't say these areas can be broken down further, doesn't mean they cannot or shouldn't.
Here he is saying that if a principle or pattern only identifies three areas of logic that can be separated, then only dealing with those three areas is "extreme" while subdividing each of those areas into smaller units is nothing more than "moderate".
He is also saying that just because a definition does not explicitly say that something should not be done does not mean that it should not be done; in other words, unless it explicitly forbids something then by implication it permits it. This is twisted logic. Every instruction manual ever written has followed the same simple pattern - it identifies only those things which are relevant, which may include a list of Do's and Don'ts. Anything that it does not mention is therefore not relevant, so if you do something which has not been identified as relevant then you should be prepared to accept the consequences and not blame the instruction manual.
The dictionary definition of "dogmatic" contains the following descriptions:
The dictionary definition of "pragmatic" contains the following descriptions:
In this post he made the following statement:
I am not a dogmatist, I am a pragmatist. I am flexible and I accept many ideas and opinions. The reason why I don't accept your is that, your idea really is complete bullshit
In this post I replied with:
I'm afraid you have got those two the wrong way round. A dogmatist spends too much time following the rules without any regard for the result. A pragmatist concentrates on the result and ignores any rules which get in the way. You are the one who insists on extreme interpretations of the rules, and I am the one who ignores the rules based on those absurd interpretations.
In this post Hall of Famer said the following:
you are the dogmatist, you spent too much time following your own set of rules so you ignore the valid points and standards from the elites/masterminds
Here he is redefining those words to mean something completely different:
In this post he made the following statement:
MVC in general is an incomplete architectural pattern that only distinguishes three components in your application, but not all. In a smaller and simpler application, sure M, V and C themselves alone are sufficient. In more complex applications, you need more components, such as DAO, Service Objects, Helper Objects, etc. You call yourself a pragmatist, not a dogmatist, then shouldn't you strip yourself out of the MVC dogma?
Here he appears to be saying that if you follow the description of a pattern as it is written then you are a dogmatist, whereas if you go beyond what is written and apply a more extreme interpretation then you are a pragmatist.
How is it possible to have a meaningful discussion with someone who keeps redefining words to suit their own point of view?
In this post he wrote:
I was saying that competent programmers should not use the outdated syntax, style and convention from PHP4 era, do you see my emphasis on the word 'outdated'. If the syntax and style are not outdated, then it becomes part of the new PHP5 and PHP7 standards, you clearly can use them
Syntax is not outdated if it is still available in the language. Code is not outdated unless it uses syntax which is not available in the current version of the language. There is no such thing as a particular "style" in which the syntax must be used, so this fictitious "style" cannot become outdated with each new version of the language. Each language, and each new version of a language, may offer slight differences in syntax, but I can use whatever syntax is available in whatever style I see fit in order to produce the results expected by my paying customers. Those results are gauged by how cost-effective they are and NOT by the style in which they were written.
In this post he wrote:
The fact that you write PHP 4 style code proves further that you are incompetent, lazy and you write bad code that is nowhere near today's standards.
He is talking himself into a hole here. If I used syntax which was available in PHP4 but no longer available in PHP7 then I would agree that my code was outdated and could be classed as "legacy", but this is not the case. I have explained several times that I have always ensured that my code runs on the latest version of PHP, and I have always modified it whenever a feature has been removed or deprecated. As my code runs in the latest version of PHP7 it is as current and up-to-date as it needs to be. Note that it is *NOT* a requirement to use every feature that is available in the language, only those features that prove to be useful in your application. This is why I do not use any of those optional extras which were added in PHP5 - my code works without them, and using them would not add value to my code, so I have no use for them. This is not incompetence, it is common sense.
In this post he wrote:
For instance, the industry standards for maximum LOC in a class is around 1000.
This is NOT an industry standard, it is just common practice within a small group of programmers. Different groups have their own practices, so there is no such thing as a single set of practices which is common to all groups.
I have read some articles which specify no more than 10 methods per class, and no more than 10 LOC per method. I have even seen an article which reduced this number to 5. These are all arbitrary and artificial numbers which I choose to ignore. I prefer to stick with the original definition of encapsulation which requires that ALL the properties and ALL the methods for an entity should be placed in the SAME class. Note that the word "all" is limitless. I also follow the description of high cohesion by ensuring that my abstract table class, which is inherited by every Model class, contains only those methods which perform the following:
It does NOT contain any logic which rightly belongs in either the Controller, View or DAO, so it fits the description of SRP as provided by Uncle Bob.
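The arrangement described above can be sketched like this. It is a drastically simplified, hypothetical version; the real abstract table class contains far more:

```php
<?php

abstract class AbstractTable
{
    protected $tableName;     // set by each concrete subclass
    protected $fieldSpec = []; // column specifications for this table

    // Generic CRUD methods, inherited unchanged by every Model class.
    public function insertRecord(array $data): array
    {
        $data = $this->validateData($data);
        // ...here the data would be passed to the relevant DAO,
        // which constructs and executes the actual INSERT query...
        return $data;
    }

    public function getData(string $where): array
    {
        // ...here the WHERE string would be passed to the DAO,
        // which executes the SELECT and returns the rows...
        return [];
    }

    // Generic validation driven by each table's column specifications:
    // columns not defined for this table are simply discarded.
    protected function validateData(array $data): array
    {
        return array_intersect_key($data, $this->fieldSpec);
    }
}

// A concrete Model class: one class per database table, one level deep.
class Customer extends AbstractTable
{
    protected $tableName = 'customer';
    protected $fieldSpec = ['customer_id' => 'string', 'name' => 'string'];
}
```

Every concrete table class gets the full set of CRUD operations by inheritance; all it has to supply is its own table name and column specifications.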
I asked him why he kept insisting that his personal interpretation of the rules could be regarded as "best practice", "industry standard" or "universal standard", to which he replied as follows:
In this post he wrote:
I didnt make it industry standard myself, the industry leaders/elite programmers made it so
In this post he said the following:
when majority of professional and elite programmers hold the same opinions, they become universal agreement.
In this post he said the following:
it is universal so long as the majority of programmers agree with a certain concept. It doesn't need to be 100%, since we need to factor out incompetent and trollish programmers like you
The FIG standard was originally just the standard for this one group, but once it becomes widely adopted by most PHP programmers, it becomes the industry standard.
Where is your evidence that "most" PHP programmers follow the FIG standards? How many PHP programmers are there? How many have signed up to follow FIG?
In this post he wrote:
it is an industry standard, these group of coders are the industry giants (ie. FIG) and they make the industry standards. Its not a personal preferences, its voted and approved by tens or even hundreds of elite coders.
When did this vote take place? How many votes were counted?
In this post he wrote:
the industry giants make standards such as PSR that every competent programmer follows nowadays. They become industry standards because most programmers agree with it.
How can you quantify "most"? Where are your figures?
In this post he wrote:
Sure there are masterminds with different and contrary opinions, but in the very end the majority rules
I disagree. The majority of programmers do NOT define the standards which all programmers must follow. The majority of programmers in a particular group MAY define the standards which all programmers in that particular group should follow, but their ability to dictate or influence does not extend beyond their particular group.
In this post he wrote:
you are the dogmatist, you spent too much time following your own set of rules so you ignore the valid points and standards from the elites/masterminds
Just because I follow a different set of rules from you does not make me a dogmatist. You are a dogmatist because you place more importance on the following of rules than the production of results. I am a pragmatist because I place more importance on the production of results than the following of arbitrary and artificial rules.
When asked to supply a definition of this "elite" in this post he said:
Those elite programmers are the masterminds such as Alan Kay, Robert C Martin, Martin Fowler, Bertrand Meyer, David Hayden, etc. These elite programmers are authors of various books and articles regarding computer science, software engineering and object oriented programming.
He failed to notice that a whole bunch of these "elite" programmers, namely Edsger W. Dijkstra, Alan Kay, Paul Graham, Richard Mansfield, Eric Raymond, Jeff Atwood, Linus Torvalds, Oscar Nierstrasz, Rich Hickey, Eric Allman, Joe Armstrong, Rob Pike, John Barker, Lawrence Krubner and Asaf Shelly, are listed in the article What's Wrong With Object-Oriented Programming? as being very unhappy with the way that the principles of OOP are being implemented. So if you think that you and your cohorts are following what these people described, then how come these "elite" people consider that what you are producing is utter crap?
His big problem here is that he considers "common practice" within one small group of programmers to be the same as "best practice" for every programmer. I do not accept this notion, and neither do any programmers who hold different views to Hall of Famer and his cohorts. It is also wrong of him to dismiss those who hold contrary views as trolls or incompetents. There is no single definition of OOP which satisfies everybody. There is no single method of implementing OOP which satisfies everybody. Different groups of programmers write different types of applications with different sets of objectives, and each group is free to use whatever language, methodology or framework they choose in order to get the job done. There is no "one-size-fits-all" language, there is no "one-size-fits-all" methodology, and there is no "one-size-fits-all" framework. This therefore means that there is no "one-size-fits-all" set of standards. I have never seen a document labelled "universal programming standards", so they do not exist. I have never been asked for my opinion on such a document, so why should I accept its contents? The FIG group does not speak for the entire PHP community, neither do those pseudo-experts who air their opinions on stackoverflow or github. The only programming standard which could be regarded as being universal is the statement that was made in 1985 by H. Abelson and G. Sussman in "Structure and Interpretation of Computer Programs":
Programs must be written for people to read, and only incidentally for machines to execute.
Anything other than this could be regarded as anal-retentive nit-picking.
The internet is a wonderful place to have a meaningful exchange of ideas with intelligent people who are willing to discuss different ideas on their merits. It is a pity that it is also inhabited by bigoted, narrow-minded, anal-retentive zealots and dogmatists like Hall of Famer who are incapable of accepting any views which they consider to be unorthodox, unapproved and therefore heretical. He seems to think that stating that my opinion is wrong is the same as proving that it is wrong. Instead of responding to my arguments with counter-arguments which are based on logic he resorts to insults such as the following:
Your viewpoints are mostly incorrect, inferior and confused.
I find your incompetency as a programmer amusing.
I am not insulting you or your work, you are an incompetent programmer and your work is utter trash.
If his views are being taught as the only way to do "proper" OOP then God help the software industry.
My technique is called different, unorthodox, and unapproved simply because of the number of rules that I break. I break these rules because I consider them to be artificial and counter-productive. I can write superior software without them, so I prefer to be a writer of superior software than a follower of rubbish rules. Using my so-called heretical and inferior techniques I can create that family of forms in less than five minutes and without writing a single line of code - no PHP, no HTML and no SQL. How? I can achieve this level of productivity simply because my approach is pragmatic instead of dogmatic, which means that I don't follow artificial rules and hope the result is satisfactory. Instead I concentrate on producing the best result possible and ignore any so-called "rule" which gets in the way. Below is a side-by-side comparison of the "right" way (i.e. approved by the paradigm police), and my not-right-therefore-it-must-be-wrong way:
1. The "approved" way: In the world of Object Oriented Design the software is king, and the database is nothing but an "implementation detail".
The Tony Marston way: Wrong. Refer to Object Oriented Database Programming for details.

2. The "approved" way: Everything is an object.
The Tony Marston way: Wrong. Refer to Object Oriented Database Programming for details.

3. The "approved" way: Use Object Oriented Design (OOD) with its "is-a" and "has-a" rules.
The Tony Marston way: Wrong. Ignore OOD completely as adequate results can be obtained directly from the Database Design. Every entity in the business/domain layer "is-a" database table, and each table "has" nothing but its own set of columns, keys and relationships. Each of these columns is represented as a simple value, not an object. Each table has its own set of business rules, so the table's class is the obvious place to define all of those rules. I have seen too many examples where someone says "We need a CUSTOMER table, but as each customer 'is-a' person we must start with a PERSON class and inherit from this to create a CUSTOMER class". There is no such concept in database design, so to include it in the software would be a pointless complication. Further details are also available in the following:

4. The "approved" way: Use object composition or object aggregation to group several database tables into a single class.
The Tony Marston way: Wrong. Refer to Object Oriented Database Programming for details.

5. The "approved" way: Favour composition over inheritance.
The Tony Marston way: Wrong. Refer to What is/is not considered to be good OO programming for details.

6. The "approved" way: Use associations to define a relationship between classes of objects that allows one object instance to cause another to perform an action on its behalf.
The Tony Marston way: Wrong. Refer to Object Oriented Database Programming for details.

7. The "approved" way: Design complex class hierarchies, where one concrete class is extended to form another concrete class. Such hierarchies can be five or more levels deep.
The Tony Marston way: Wrong. No table in a database is ever extended to form another table, so no table class is ever extended to form another table class. Each concrete class "is-a" database table and can therefore inherit from a single abstract table class, thus producing a class hierarchy which is only one level deep. I never extend a concrete table class to form another concrete class, which means that I never have the problems which other people have encountered.

8. The "approved" way: Identify all the methods/operations which can be performed either on or by the entity which the table represents.
The Tony Marston way: Wrong. The software is NOT interacting with any external entities, only tables in a database, so only those operations which can be performed on a database table are relevant. These operations are Create, Read, Update and Delete, and are generic enough to be defined in the abstract table class which is inherited by every Model class. Once I have designed a properly normalised database all I need do is create a separate class for each table. This is generated by the framework rather than being created by hand. Each use case is then implemented as a different user transaction which links one of the Transaction Patterns to a Model. The Controller which implements each of these patterns will only use those generic methods which were inherited from the abstract table class.

9. The "approved" way: For use cases such as "Create Account", "View Account" or "Pay Invoice" you must create methods with the same names.
The Tony Marston way: Wrong. Refer to Object Oriented Database Programming for details.

10. The "approved" way: Create a separate property for each column in the table.
The Tony Marston way: Wrong. Refer to Object Oriented Database Programming for details.

11. The "approved" way: Create a getter and setter for each column in the table.
The Tony Marston way: Wrong. Refer to Object Oriented Database Programming for details.

12. The "approved" way: You must use the constructor to populate an object with data.
The Tony Marston way: Wrong. Refer to Object Oriented Database Programming for details.

13. The "approved" way: Data must be validated before it is put into the Model as the Model is not allowed to contain invalid data.
The Tony Marston way: Wrong. Refer to Object Oriented Database Programming for details.

14. The "approved" way: You must validate each value within its setter.
The Tony Marston way: Wrong. Refer to Object Oriented Database Programming for details.

15. The "approved" way: Each object can only deal with a single row from a database table.
The Tony Marston way: Wrong. Refer to Object Oriented Database Programming for details.

16. The "approved" way: To implement MVC a Controller can only access a single Model.
The Tony Marston way: Wrong. Refer to Object Oriented Database Programming for details.

17. The "approved" way: To implement MVC a Model can only be accessed by a single Controller.
The Tony Marston way: Wrong. Refer to Object Oriented Database Programming for details.

18. The "approved" way: Build each Model class by hand to contain all the properties and methods which have been identified.
The Tony Marston way: Wrong. Refer to Object Oriented Database Programming for details.

19. The "approved" way: Hand craft a separate controller for each object in order to call all its methods.
The Tony Marston way: Wrong. Refer to Object Oriented Database Programming for details.

20. The "approved" way: Hand craft a separate view for each object in order to display its data in whatever formats may be required.
The Tony Marston way: Wrong. Refer to Object Oriented Database Programming for details.

21. The "approved" way: Hand craft a separate script for each task (user transaction).
The Tony Marston way: Wrong. To generate a new task or a family of tasks use the Data Dictionary to select a table, link it with a Transaction Pattern, then press a button to generate a new component script for each task. Note that this procedure will automatically add the relevant entries to the MENU database.

22. The "approved" way: Create a separate Data Access Object for each table in the database which can construct all the necessary queries for that table.
The Tony Marston way: Wrong. I do not have a separate DAO class for each table, only for each supported DBMS - currently MySQL, PostgreSQL, Oracle and SQL Server. This handles all communication with the physical database. The abstract table class, which is inherited by every Model class, contains all the relevant methods to instantiate and pass control to the relevant DAO as and when necessary.

23. The "approved" way: Use an Object Relational Mapper (ORM) to handle all communication with the database.
The Tony Marston way: Wrong. Refer to Object Relational Mappers for details.

24. The "approved" way: Create a variety of finder methods to access records in each database table.
The Tony Marston way: Wrong. Refer to A minimalist approach to Object Oriented Programming with PHP for more details.

25. The "approved" way: Use as many Design Patterns as possible, especially in weird combinations, just to prove how clever you are.
The Tony Marston way: Wrong. You should not make a list of 'cool' patterns and then try to implement them, you should design and code incrementally, and only refactor to patterns as you understand more of the problem. A design pattern should only be applied when the flexibility it affords is actually needed. Refer to Design Patterns - a personal perspective for details. IMO design patterns are the wrong level of abstraction. I get much more reusability from Transaction Patterns, as described in Design Patterns are dead! Long live Transaction Patterns!.
|26||Implement all the SOLID principles until they can be implemented no more, just to prove how clever you are.||Wrong. These principles were badly written as they do not provide clear, concise, accurate and unambiguous definitions. This has led to them being redefined and reinterpreted so many times that the original idea has been totally lost. It is therefore impossible to follow one of these interpretations without upsetting the supporters of the others. This is a lose-lose situation as whatever you do will be perceived as wrong by someone somewhere.
For a more detailed critique please refer to Not-so-SOLID OO Principles.
|27||Use exceptions for all validation errors.||Wrong. Refer to Exceptions for details.|
|28||You must use object interfaces.||Wrong. Refer to Object Interfaces for details.|
|29||You must use the visibility options.||Wrong. Refer to Visibility for details.|
|30||You must keep up with all the latest features in the language.||Wrong. I designed and built my PHP framework using PHP 4, and as this provided full support for encapsulation, inheritance and polymorphism I managed to achieve the objectives of OOP by creating more reusable code and reducing code maintenance. Although various new features have been added to PHP 5 in order to support what others seem to think are "essential" in OOP, I ignore these optional extras because the time spent in adding them to my code would be greater than any value that would be added. Anyone who understands the concept of Cost-Benefit Analysis or Return on Investment will understand my reasoning.|
|31||You must hide the fact that your software is communicating with a database.||Wrong. Refer to Object Oriented Database Programming for details.|
|32||You must use a Front Controller||Wrong. Refer to A minimalist approach to OOP with PHP for details.|
|33||You must not use global variables||Wrong. Refer to Your code uses Global Variables for details.|
|34||You must not use singletons||Wrong. Refer to Your code uses Singletons and Singletons are NOT evil for details.|
|35||Each object must have a unique identity||Wrong. When I create an object it is put into a variable which has a name, and that name is more than sufficient. Each object is used to manipulate data, and after each request has been processed it simply dies, and all in-memory objects used in that request die with it. Objects do not persist after the request dies, but any data which was placed in a database does. Objects are not stored in a database, only tables and columns. Object Oriented databases are an idea that has never taken off in the mainstream, so the notion of storing objects in a database is a pipe dream. For each new request any persisted data can be pulled from the database and placed in a brand new object, so the need for object identity does not exist. Each record in the database has its own identity (called a primary key) and I have never found a use for anything other than this.|
|36||You must use mock objects||Wrong. Refer to Object Oriented Database Programming for details.|
|37||You must create a class diagram||Wrong. Refer to Object Oriented Database Programming for details.|
|38||You must use immutable objects||Wrong. Refer to Object Oriented Database Programming for details.|
|39||All class constructors must be empty||Wrong. Refer to Object Oriented Database Programming for details.|
|40||An abstract class must contain mostly abstract methods||Wrong. Refer to He does not understand that an abstract class may contain non-abstract methods. for details.|
|41||You should not have a separate class for each database table||Wrong. Refer to Having a separate class for each database table IS good OO for details.|
|42||You should practice Domain Driven Design||Wrong. Refer to Why I don't do Domain Driven Design for details.|
|43||You should use hyperlinks to transfer control from one form to another.||Wrong. In order to jump from a parent LIST form to a child ADD, ENQUIRE, UPDATE or DELETE form you can use either a hyperlink which identifies the selected row in the URL, or a button which posts back into the current form.
Buttons offer a more user-friendly experience. Another disadvantage of using hyperlinks is that you must include the primary key of the selected row in the URL, which is considered to be a huge security risk. By posting into the current form the framework will create an area in the $_SESSION array to hold all the arguments for the child form before switching to that form automatically. When that child form is activated it will use that session data. As that data exists on the server it is never exposed to the client.|
|44||You should use Value Objects||Wrong. Refer to Improving PHP's Object Ergonomics for details.|
Do you know of any other "rules" which I am breaking?
A large number of people fail to realise that the primary function of a software developer is to develop cost-effective software for the people who pay their wages. It is NOT to develop software according to an arbitrary set of artificial rules in the hope that the outcome will be acceptable. As I have said in an earlier article, programming is an art, not a science, so unless a person has the basic talent to begin with it will be very difficult to turn that person into a skilled artisan. Instead you will end up with a bunch of Cargo Cult programmers or Copycats.
I started to use an OO-capable language (PHP) in 2002 when the definition of OO was quite simple and had not yet been corrupted into a confusing mess. The definition of OOP that I used then, and still use today, goes as follows:
Object Oriented Programming is programming which is oriented around objects, thus taking advantage of Encapsulation, Inheritance and Polymorphism to increase code reuse and decrease code maintenance.
In my view an OO language is exactly the same as a procedural language except for the addition of Encapsulation, Inheritance and Polymorphism. When rebuilding my old development framework in this new language I had two goals in mind:
I had already become familiar with the 3-Tier Architecture in a previous language, and I was so impressed with its results that I decided to build my new framework around this architecture. This would require having separate components in the Presentation layer, Business layer and Data Access layer.
The first difference between procedural and OO programming is that procedural code is arranged into functions or procedures while OO code is arranged into classes and methods. Each procedural function is a single stand-alone unit which does not have any state, while in OO a number of functions can be grouped together in the same class and become class methods. Unlike procedural code, an OO class can have state in the form of class variables/properties and can be instantiated into a number of instances which are called objects.
The process called Encapsulation allows you, once you have identified a business entity, to group together all that entity's data (properties or variables) and operations (functions or methods) in the same class. Note here that you do NOT put all a program's functionality into a single component/class as this would produce a monolithic structure instead of a layered structure. You also do NOT put each function into its own class as all related functions should be grouped together in the same class in such a way that you maximise cohesion and minimise coupling.
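This grouping of state and behaviour can be sketched in a few lines. The names below are invented for illustration, and Python stands in for PHP purely for brevity - the structure is identical in any OO language:

```python
# A procedural function: a stand-alone unit with no state of its own.
def calculate_invoice_total(lines):
    return sum(qty * price for qty, price in lines)

# The same operation encapsulated with its data: the entity's state
# (properties) and its operations (methods) live in a single class,
# which can be instantiated into any number of objects.
class Invoice:
    def __init__(self, lines):
        self.lines = lines  # state held by each instance

    def total(self):
        return sum(qty * price for qty, price in self.lines)

inv = Invoice([(2, 10.0), (1, 5.0)])
print(inv.total())  # → 25.0
```

Each object created from the class carries its own data, whereas the bare function must be handed that data on every call.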
In my Business layer I decided to have a separate class for each table in the database, and as all these table objects would have a lot of common code I wanted a way to share it instead of duplicating it. I ended up putting all this common code into an abstract table class which I could incorporate into each concrete table class through that OO mechanism called inheritance. I now have 9,000 lines of code in my abstract class which is shared by 400 table classes, so that is a LOT of reusability. So why do some confused souls keep telling me that this is too much reusability? Is there such a thing as too much reusability? Who decides how much is "too much"?
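A heavily simplified sketch of that arrangement might look as follows. These class and method names are illustrative inventions, not RADICORE's actual code, and Python again stands in for PHP; the point is that the generic behaviour is written once in the abstract class while each concrete class supplies nothing but its own table details:

```python
class AbstractTable:
    """Generic behaviour shared by every database table class."""
    table_name = None
    columns = ()

    def insert_record(self, data):
        # keep only the columns which belong to this table
        row = {k: v for k, v in data.items() if k in self.columns}
        cols = ", ".join(row)
        marks = ", ".join("?" for _ in row)
        return f"INSERT INTO {self.table_name} ({cols}) VALUES ({marks})"

    def get_data(self, where):
        return f"SELECT * FROM {self.table_name} WHERE {where}"

# Each concrete class need only identify its own table structure;
# all the logic above is inherited, not duplicated.
class Customer(AbstractTable):
    table_name = "customer"
    columns = ("customer_id", "name", "email")

class Product(AbstractTable):
    table_name = "product"
    columns = ("product_id", "description", "price")

print(Customer().insert_record({"name": "Smith", "email": "a@b.c"}))
# → INSERT INTO customer (name, email) VALUES (?, ?)
```

Because every concrete class shares the identical method signatures, any Controller written against those signatures will work with any table class - which is polymorphism doing the heavy lifting.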
I removed all database access from the Business layer objects and placed it into a separate Data Access Object (DAO). I started with a class for the original MySQL extension and when MySQL version 4.1 was released I created another for the improved MySQL extension. This meant that I could switch from the original database extension to the improved one without having to change anything in the Business layer. Later on I added classes to deal with the PostgreSQL, Oracle and SQL Server databases. This means that when customers use the large ERP package which I have built they are not tied to a DBMS of my choosing, they can choose from the options which are supported.
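That separation can be sketched like this (invented names, Python for brevity - not the framework's actual API). The Business layer asks for "a DAO" and receives whichever concrete class matches the configuration; because every DAO shares the same method signatures they are interchangeable:

```python
class MySQLDao:
    def select(self, sql):
        return f"[mysql] {sql}"

class PostgreSQLDao:
    def select(self, sql):
        return f"[postgresql] {sql}"

DAO_CLASSES = {"mysql": MySQLDao, "postgresql": PostgreSQLDao}

def get_dao(dbms):
    # the Business layer never names a concrete driver; switching
    # the DBMS means changing one configuration value, with no
    # changes to anything in the layer above
    return DAO_CLASSES[dbms]()

dao = get_dao("postgresql")
print(dao.select("SELECT * FROM customer"))
```

Adding support for another engine means adding one more class to the map, which is how the Oracle and SQL Server drivers were accommodated without disturbing the Business layer.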
While building my Presentation layer components to handle the HTTP requests and responses I decided to generate all HTML output using XSL Transformations which required the use of XML documents and XSL stylesheets. I built a separate component to extract all the data from a Business object, write it to an XML document, then transform it into HTML. It was then pointed out to me that this arrangement conformed to the description of the Model-View-Controller design pattern as I now had Controller and View components in the Presentation layer, and Model components in the Business layer.
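The Model-to-XML half of that pipeline might be sketched as follows, using Python's standard xml.etree purely for illustration (the real framework uses PHP's XSL extension, and the stylesheet step is not shown here):

```python
import xml.etree.ElementTree as ET

def model_to_xml(table_name, rows):
    # extract all the data from the Model and write it to an XML
    # document; an XSL stylesheet (not shown) then transforms the
    # XML into the final HTML response
    root = ET.Element(table_name)
    for row in rows:
        rec = ET.SubElement(root, "row")
        for col, val in row.items():
            ET.SubElement(rec, col).text = str(val)
    return ET.tostring(root, encoding="unicode")

print(model_to_xml("customer", [{"id": 1, "name": "Smith"}]))
# → <customer><row><id>1</id><name>Smith</name></row></customer>
```

Because the View only ever sees an XML document, one generic View component can serve every Model, with the per-screen differences confined to the stylesheets.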
So how much reusability did I achieve?
More details are provided in Levels of Reusability.
Having a larger amount of reusable code should mean that you can get things done by calling a pre-written function instead of having to write that code again, and as writing less code takes less time it should result in being able to achieve more in a given time frame, which in turn should lead to an increase in productivity. As I have spent the vast majority of my career working for software houses instead of end users, this means that I have learned to compete for business against rival software houses. In order to win business in such a competition you have to be able to demonstrate that your solution will be more cost-effective than that of your rivals, and the ability to be more productive (i.e. create software at a faster rate and therefore lower cost) than your rivals is always a good selling point.
The above list identifies all the steps that you DON'T have to take in order to write effective software. By ignoring the "approved" methodology and following one which, based on the KISS principle, is as simple as possible and which bypasses unnecessary complexity, I have created a framework which takes all the drudgery out of writing database applications. With it I can create a simple family of forms without writing a single line of code, but which covers all the basic operations and which has access to all the features provided by the framework. Only a small amount of code is actually generated by the framework:
Each of these scripts is so small for the simple reason that all the "heavy lifting" is done within the framework by volumes of pre-written and reusable code. If the stated purpose of OOP is to provide ways of increasing the volume of reusable code, and I have done just that, how can you possibly tell me that my methods are wrong? Unless you are capable of producing comparable amounts of reusable code I would suggest that it is your methods which need to be questioned, not mine.
Some people might think that, due to the simplicity of the above tasks, my framework is only capable of dealing with simple CRUD screens. Think again. The example shown in Figure 2 uses only 6 of the available 50+ items in my library of Transaction Patterns, and there is a pattern for every possibility that I have encountered in the 30+ years that I have been designing and building enterprise applications. In addition to the basic functionality that each pattern provides, any of this behaviour can be extended or replaced by putting appropriate code in any of the available customisable methods.
Building database applications with my RADICORE framework is a breeze, provided that you start with a database design. It does not matter if the database design changes during the development process as such changes can be dealt with quite easily by the framework. The number of database tables is largely irrelevant, as is the number of tasks required to deal with those tables. Anyone who builds enterprise applications should be aware of Len Silverston's Data Model Resource Book in which he provides a library of universal data models for all enterprises. In 2007 I started building an enterprise application based on several of his database designs:
In a short space of time I had a working application which I could demonstrate to my client. By "short space of time" I mean six man-months, which equates to an average of one month per database. Can your framework match that? If not then please stop telling me that my methods are wrong when clearly my results are superior.
This enterprise application is still going strong, and to date it has 369 tables, 718 relationships, and 2,900 user transactions. It is being sold to multi-national organisations all over the world through my business partner Geoprise Technologies.
Let me prove my superior productivity with an example and a challenge. Every database application consists of one or more databases containing one or more tables, and each table has its own set of columns, keys and indexes. Tables may be related to other tables in what are known as relationships where one table (the child) contains a reference (known as a foreign key) to another table (the parent). Relationships will also require some form of referential integrity.
In order to maintain the contents of a database table the application software will have a number of user transactions (sometimes known as "tasks" or "use cases") each of which will perform one or more actions on one or more tables. In an online system each task will communicate with the user by means of a User Interface (UI), also known as a form or screen, on a client device such as a PC, tablet or smart phone. Each transaction will have a name which is meaningful to the user, such as "Create Account", "View Account" or "Pay Invoice", and these names will appear on some sort of menu system which enables the user to quickly activate the relevant task for a specific business operation. In my own enterprise application, for example, I have over 400 database tables and over 2,500 tasks.
Suppose I add a new table to my application database and then want to implement a standard family of CRUD forms as shown in Figure 2:
Figure 2 - A typical Family of Forms
Each of the six components above will perform a single function on that single table. The LIST and ENQUIRE tasks will perform a READ operation on that table, the ADD will perform a CREATE, the UPDATE will perform a READ and an UPDATE, the DELETE will perform a READ and a DELETE, while the SEARCH will supply data which is used by the LIST1 task to filter its results. Note that each task in that diagram is a hyperlink which will take you to a full description of its structure and behaviour.
As well as constructing the six new components they should also appear as options in the application menu so that they can be run immediately. The parent LIST task should be made available on a menu screen, while its associated child tasks should only be available as options on the navigation bar within the parent task.
Here is the challenge: using your favourite framework how much code would you have to write to implement that family of forms, and how long would it take? If you have to write ANY code, or it takes longer than FIVE minutes, then I'm afraid that you can pack up your bags and go home as you have failed!
If you cannot do what I do then you have no right to tell me that I am wrong. If your standards don't allow you to produce software of the same quality and at the same speed as mine then I'm afraid that it's your standards which are crap, not mine.
When my critics such as Hall of Famer make statements such as "When a class has 9000 lines, it is guaranteed to have more than one responsibility" they are basing this assumption on size instead of substance. There is no limit to the size of a class, either abstract or concrete. The only deciding factor is called cohesion, and that requires a modicum of intelligence that goes beyond the ability to count higher than ten without having to take your shoes and socks off. My architecture, as shown in Figure 1, is based on a reasonable and moderate interpretation of SRP where the logic has been separated into a Model, View, Controller and DAO. My so-called GOD class is an abstract class which is inherited by every Model, and which contains all the methods necessary to handle the communication from the Controller, to the DAO, and everything in between. It does not contain "most" of the program logic within the application, only 17%. There is absolutely no program logic in the Model which belongs in the Controller, View or DAO, so any accusation that it breaks SRP is completely bogus. Unless, of course, you have a perverted and extreme interpretation of SRP, which I do not.
When my critics make statements such as "You are not following the standards, therefore your code must be crap" they are assuming that it is only their interpretation of the standards that leads to the production of good software. These people cannot tell the difference between Best Practices and Excess Practices. The simple truth is that my moderate interpretation of the standards, and the way in which I have implemented them, has allowed me to create enterprise applications in a more cost-effective manner than any of my rivals. This is because I have fulfilled the objective of OOP and created more reusable components than I could with any of my previous non-OO languages. Using my open-source RADICORE framework I can create initial working user transactions without having to write a single line of code, a feat which is impossible for those who follow the "approved" standards. My methods work, so they cannot be wrong. My methods produce superior results, so they cannot be crap. Only a methodology which produces inferior results can be regarded as crap, so I would suggest that Hall of Famer looks closer to home if he wants to find crap.
As you can see from the above list I do not follow a significant number of the techniques/principles/rules which have been approved by others in the programming community. The primary reason for this is that these rules simply did not exist when I made the switch to an OO-capable language after working for several decades in a variety of other languages. What resources I found on the internet made no mention of them at all. All I found were a few small articles which provided basic examples of how to implement the principles of OOP which were described as follows:
Object Oriented Programming is programming which is oriented around objects, thus taking advantage of Encapsulation, Polymorphism, and Inheritance to increase code reuse and decrease code maintenance.
This told me that the only difference between an OO language and a non-OO language is that one supports Encapsulation, Inheritance and Polymorphism while the other does not. Both execute code in a linear fashion, and both allow code to be grouped into functions. The distinction is that with OOP the functions are called "methods" and related methods can be grouped into "classes" which can then be instantiated into "objects". Code can be shared using inheritance, and the ability for different classes to share the same method signature provides a mechanism called polymorphism. By using these three capabilities in the language I should be able to produce a higher volume of reusable code, thus shortening development times and lowering costs. The more reusable code you have the less code you have to write, and as the reusable code has already been used and debugged it also decreases the maintenance effort.
As I only build database applications I created a framework to assist in the development and running of these applications, and I based this on similar frameworks which I had created in two of my previous languages. As database applications are written specifically to put data into and get data out of a database, with a bit of processing in between, I built my framework specifically to assist in this type of processing, so it makes no attempt to disguise the fact that it is communicating with a relational database.
After producing code which worked I decided to publish articles describing my techniques on the internet in order to share my results with others and hopefully add to the pool of knowledge. Imagine my surprise when I was told that everything I was doing was wrong!
I refused to accept that I was wrong for the simple reason that my techniques worked, and anything which works cannot be wrong. I was incorporating encapsulation, inheritance and polymorphism into my code and achieving the objective of producing more reusable code, so I could see no justification for their complaints. Having more reusable code provided me with the following:
Some of the arguments I was given to persuade me to change my unorthodox ways were devoid of any substance.
Statements such as "Real OO programmers don't do it that way" or "That is not how it is supposed to be done" simply cannot be justified as they are devoid of details. I want to know what the problem is. To be informed that there is some sort of theoretical problem is not good enough.
Some of the explanations I was given simply did not add up. Take the following statement, for example:
Having a separate class for each database table is not good OO. Abstract concepts are classes, their instances are objects. IMO The table 'cars' is not an abstract concept but an object in the world.
Surely the concept of a database table is abstract, while the 'cars' table is a concrete instance of that abstraction. Then why is it wrong to have an abstract table class which is inherited by every concrete table class? Any code which is common to every database table can therefore be defined in the abstract class and shared by every concrete class. How can this be the wrong use of inheritance?
Some of the reasons were to get round a problem which I did not have. For example, the rule "favour composition over inheritance" was only invented for those developers who made a complete dog's dinner of inheritance and created deep inheritance hierarchies by inheriting from one concrete class to create a completely different concrete class. I never do this, which means that I don't have that problem, which also means that I have no use for that solution.
Some of the explanations are expressed using terminology which is designed to show the cleverness of the author but which is lost on us mere mortals. For example, take the following statement:
High-level modules should not depend on low-level modules. Both should depend on abstractions.
Abstractions should not depend on details. Details should depend on abstractions.
I have been doing OOP for over 10 years, and I have learned to look up the meaning of words in Wikipedia. The term "abstraction" has two possible meanings - the act of abstracting and the result of that act. An abstraction is a concept from which concrete examples can be created or instantiated, which leads me to believe that in computer science the act of performing an abstraction should result in the creation of an abstract class from which a number of concrete classes can be instantiated. The description above is actually used for the Dependency Inversion Principle (DIP), but nowhere does it mention either of the critical aspects of "dependency" or "injection" which is what that principle is all about. The implementation of that principle does not require any abstract classes, so the description is both meaningless and misleading.
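For readers who have not met it, the mechanism usually associated with DIP is dependency injection, which can be sketched with invented names as follows. Note that the high-level class simply receives its dependency through its constructor - no abstract class is involved anywhere:

```python
class FileLogger:
    def write(self, msg):
        return f"file: {msg}"

class ConsoleLogger:
    def write(self, msg):
        return f"console: {msg}"

class ReportGenerator:
    # the dependency is injected rather than hard-coded: any object
    # with a write() method will do, so the high-level class does
    # not name any low-level class - and no abstract class exists
    def __init__(self, logger):
        self.logger = logger

    def run(self):
        return self.logger.write("report complete")

print(ReportGenerator(ConsoleLogger()).run())  # → console: report complete
```

The two logger classes are interchangeable purely because they share the same method signature, which illustrates why the "abstractions" in the DIP description need not be abstract classes at all.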
Some principles and design patterns are designed for a specific set of circumstances, and unless those circumstances exist in your software then implementing that particular principle or pattern could end up doing more harm than good. Adding in code to deal with a problem which you don't currently have and are unlikely to have is nothing more than a waste of time. See YAGNI for details.
Some principles are overused by some people simply because they don't know when to stop. For example, I use SRP up to a point, but I cease and desist when it starts to break encapsulation and destroy cohesion. Other less intelligent souls keep extracting until there's nothing left to extract, but they fail to see that the result is a huge set of anemic micro-classes which resemble ravioli code and which can bloat call stacks and make navigation through the code for maintenance purposes much more difficult.
The purpose of a software developer is to develop cost-effective software for the paying customer, not to impress other developers with the cleverness or "purity" of their code. Producing software in a timely manner which is good enough for the customer should be more important than wasting time in attempting to produce software which is perfect in the eyes of a few developers. The concept of good enough is easier to define and easier to deliver than perfect, so that is where I prefer to concentrate my efforts.
When people like Hall of Famer tell me that because I don't follow "the standards" my code must be crap I just have to shake my head at their naivety. I am not breaking "the" standards as there is no such thing as a single set of standards which is universally followed by all programmers. Such a document has NEVER been published. There is no "one size fits all" when it comes to programming standards as different groups of programmers will use whatever suits them best. Some programmers adopt a pragmatic and moderate approach while others, those that I refer to as the Paradigm Police or the OO Taliban, lean more towards the dogmatic and the extreme. Besides, the only real rule in programming is to write software that works and which can be understood and maintained by others. Everything else is just icing on the cake, a nice-to-have, an optional extra. They are guidelines, not rules. Following them will not guarantee that your software will succeed any more than not following them will guarantee that your software will fail. It is not which principles or rules you follow but how you follow them that is important. You need to know when the circumstances are appropriate and benefits can be gained by following a principle, but just as importantly when the circumstances are not appropriate in which case the benefits could be zero or even negative.
Here endeth the lesson. Don't applaud, just throw money.
Here are some other heretical articles I have written on the topic of OOP:
|10 Jan 2021||Added Rule #43 and Rule #44 to Look at how many rules I break.|
|01 Mar 2018||Added Rule #42 to Look at how many rules I break.|
|01 Dec 2017||Added Rule #41 to Look at how many rules I break.|
|03 Aug 2017||Added He does not understand that an abstract class may contain non-abstract methods.
Added Rule #40 to Look at how many rules I break.|
|14 Apr 2017||Added Your singleton breaks encapsulation.
Added Singletons kills polymorphism.
Added MVC is an incomplete architectural pattern.|
|02 Apr 2017||Added He does not understand what "efficiency" means.
Added He does not understand what "productivity" means.
Added He does not understand what "coupling" means.
Added He places limitations on how to use an "abstract class".
Added He does not understand the difference between "common practice" and "best practice".|
|05 Mar 2017||Added Rule #36, Rule #37, Rule #38 and Rule #39 to the list in Look at how many rules I break.|
|01 Feb 2017||Increased the number of topics in Look at how many rules I break from 16 to 35.
Added The Faulty Logic of "Hall of Famer".|