Let me start with some universal truths:
There are some practices which I regard as universally applicable, and that is because they are not tied to any particular programming language or paradigm, nor do they have any preferred or even suggested implementations. They are general philosophies rather than itemised rules, and all they require is sensible interpretation from people who can think for themselves rather than those who can only follow the direction set by others.
The topic of "best practices" is often talked about as if it were a set of rules that is cast in stone and universally accepted by every programmer on the planet, but that is not the case. What some people regard as being best for them others will dismiss as second-best or even nowhere-near-best. In the 1980s and 90s I worked in several teams for different organisations, and each team had its own set of programming standards. Each of these had areas which I liked and some which I disliked. The idea of a single set of universal standards simply did not exist, mainly because getting a group of programmers to reach a consensus on the definition of "best practice" is like herding cats, with the likelihood of success being inversely proportional to the size of the herd.
During my time as a junior programmer I began to formulate my own set of standards by sifting through what I had encountered and filtering out any practices which I saw as getting in the way of productivity. I did not follow a rule as if it were cast in stone; I examined it, and if it did not measure up to my expectations I discarded it in favour of something better. By "better" I mean something which increased my levels of productivity in the most cost-effective manner. This means that I am results-oriented, not rules-oriented. I am a pragmatist, not a dogmatist.
In those decades I was mostly employed by software houses where we designed and built bespoke solutions for different clients, and as we had to compete against other software houses for business we had to demonstrate that we were the most cost-effective, which meant having higher rates of productivity. When I became team leader at Prolog System Limited there were no formal standards, so it was a free-for-all. This was unacceptable to me, so I made my personal standards (which can be viewed on my COBOL page) available to everyone in my team. After very little training they all saw a decrease in program bugs and an increase in productivity. This pleased my bosses so much that they formally adopted my personal standards as the company standards.
I have built enterprise applications in COBOL, UNIFACE and PHP, and because they are different what works in one language may not work in another. While the objectives remained the same, the means of achieving those objectives were different in each language. In other words "what needs to be done" was consistent, but "how can it be done" has an infinite number of variations, and each team has its own version of "how" based on their set of experiences. It is one thing to learn the capabilities of the language, but it is another to learn how to utilise those capabilities to best effect. Whenever a new version of the language was released I took the time to examine any new features, and if any looked as if they could add value to the code I would evaluate them to compare the costs against their benefits. Anything which enabled me to replace duplicated code with reusable code, or enabled me to do something with less code, and hence increased my productivity, was always beneficial.
When I switched to PHP with its Object Oriented capabilities at the start of this century I carried on with this habit. After learning the mechanics of Encapsulation, Inheritance and Polymorphism I began writing code which incorporated high cohesion and loose coupling to produce the best results with the least amount of effort, thus contributing to higher levels of productivity. After completing several modules I would often see repeating patterns that could be turned into code that could be reused instead of being duplicated. Another practice I follow to reduce the amount of code which I write is to avoid writing code that is not necessary, and to always choose the simplest solution instead of a complex one which invariably ends up resembling a Rube Goldberg machine.
I didn't follow what was later pointed out to me as "best practices" for the simple reason that I didn't know that they existed. Fellow developers began to ridicule my work by saying "You are not following *this* principle or *that* principle, therefore your work is wrong and inferior."
When I examined these so-called "superior" techniques I detected a huge flaw in their arguments - the results which they achieved were not as good as mine, for reasons such as:
By following only those practices which actively contribute to my high levels of productivity (what used to take me 1 week in COBOL and then 1 day in UNIFACE I reduced to 5 minutes in PHP) I have found myself discarding quite a number of practices and principles which other programmers regard as being sacrosanct. They criticise me for having the audacity to break their precious rules while totally ignoring the results which I have achieved. I single-handedly designed and built the RADICORE framework which I released as open source in 2006, and I used this framework to single-handedly design and build TRANSIX, my first ERP package application, in 2008. This has now grown into the GM-X Application Suite which has been sold on three continents.
The following ideas are not tied to any particular programming language or paradigm, nor do they have any preferred or even suggested implementations. They are general philosophies rather than itemised rules, and all they require is sensible interpretation from people who can think for themselves.
You cannot move mountains if you believe them to be mountains.
You must think of them as collections of small stones,
Which can be moved one at a time, and then reassembled.
-- The Tao of Meow
Programs must be written for people to read, and only incidentally for machines to execute.
Trying to squeeze multiple statements onto a single line, or using symbols instead of words because you believe that PHP is too verbose, is not a good idea. I would rather spend 5 minutes writing a piece of code which can be understood by a stranger in 5 seconds than the other way around.
OO is rooted in those best-practice principles that arose from the wise dons of procedural programming. The three pillars of "good code", namely strong cohesion, loose coupling and the elimination of redundancies, were not discovered by the inventors of OO, but were rather inherited by them (no pun intended).
Cohesion is the degree to which the responsibilities of a single module/component form a meaningful unit. High/strong cohesion is considered to be better than low cohesion.
Coupling is the degree of interaction between two modules. Whenever you have one module calling another you have coupling. Loose coupling is considered to be better than tight coupling as it reduces the Ripple Effect when a change to one module requires corresponding changes to other modules.
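The strength of that coupling can be seen in how much the caller has to know about the module it is calling. As a small illustration (the function names and columns are invented for this example):

<?php
// Tighter coupling: the caller must supply the exact number, order and type
// of arguments, so adding or removing a column means changing every call.
function insertPersonTight(string $person_id, string $first_name, string $last_name, string $dob)
{
    // ... build and execute an INSERT statement ...
}

// Looser coupling: a single array argument, so the list of columns can
// change without touching the signature or any existing caller.
function insertPersonLoose(array $fieldarray)
{
    // ... build and execute an INSERT statement from whatever keys are present ...
}

insertPersonLoose($_POST);   // whatever the HTML form submitted goes straight in
?>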
The elimination of redundancies can have several flavours:
Ever since I started publishing articles regarding my experiences with PHP other programmers have told me that the code which I write and the methods which I use are completely wrong simply because I am not following "best practices". By this they mean that I am not following the same set of practices as they are, but I see no reason why I should. I have worked with many teams in many organisations, and the only thing in common with what they called "programming standards" was that every team had its own set. When I became team leader in the 1980s the practices which I had been accumulating and documenting were formally adopted as the company's COBOL standards. My methods cannot be wrong simply because they work, and something that works cannot be wrong just as something which does not work cannot be right.
I did not follow these "best practices" for one simple reason - I did not know that they existed. When I became aware of them and examined them I quickly realised that changing my code to conform to this "advice" would have a negative effect on my productivity, as it would mean using code that was more complicated, more convoluted and less efficient than mine, so I did the only sensible thing which was to ignore it, to consign it to the dustbin, to flush it down the toilet.
Every time I am told "you should be doing it this way" my first response is to ask "Why? What bad thing happens if I don't? What good thing happens if I do?" If there is no proof, preferably with a code sample, that following the rule has benefits then I see no reason why I should follow it.
To counter my detractors I have written a number of blog posts which identify the "best practices" which I choose to ignore as well as explain precisely why I ignore them and why I believe that my own practices are better. These are listed below:
The "need" for an ORM is caused by deliberately using one methodology to design the database and a separate methodology to design the software, thereby causing a mismatch. My solution is not to create this mismatch in the first place. I design the database first, then build the software around that design using my framework.
Some programmers insist that I use a complicated mechanism to inject every dependency even when there is not a selection of dependencies to choose from. Dependency Injection (DI) cannot be achieved without polymorphism, and the more polymorphism you have the more opportunities you have to share code using DI. Every Model class in my framework shares the same set of public methods, and as each Controller calls these methods it means that any Controller can be reused with any Model. While I do inject entities into services where there are multiple choices, I do not inject entities into entities as there is never more than a single choice.
I only follow those principles which I consider to be appropriate.
Singletons can be implemented in one of two ways - either as a separate method within each class, or as a static method within a single singleton class. The latter choice does not have the problems encountered in the former, so those coders who think that singletons are bad are obviously using the inferior implementation.
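As a rough sketch of the second approach (the class and method names here are purely illustrative, and this is not actual framework code):

<?php
// A single class whose static method manages one instance of any named class.
class singleton
{
    private static array $instances = [];

    public static function getInstance(string $class): object
    {
        if (!isset(self::$instances[$class])) {
            self::$instances[$class] = new $class();   // created on the first request
        }
        return self::$instances[$class];               // reused on every subsequent request
    }
}

class person { }   // any ordinary class, which needs no singleton code of its own

$dbobject = singleton::getInstance('person');
?>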
Getters and setters (also known as Accessors and Mutators) grew out of the idea that every entity which has multiple items of data, such as a database table and its collection of columns, must have a separate class property defined for each of those columns. This then forces those columns to require separate pieces of code to put data into and get data out of the object. This is not how any of the DBMS systems which I have ever used, whether hierarchical, network or relational, have operated, nor is it how the various programming languages have dealt with their data. In languages such as COBOL table columns were never defined or addressed individually, they were always defined in aggregates (also known as records, blocks, structures or composite data types).
When I was building the prototype application which provided the basis for my later framework I noticed very quickly that PHP also had the ability to deal with aggregated data in the form of arrays. I also noticed that data being submitted from an HTML document was presented in the form of the $_POST array, and the data being retrieved from the database was also presented as a FETCH array.
In each of my table classes I do NOT have a separate property for each column with its own getter and setter, I pass all the data around, both in and out, in a single array argument which is precisely how it is handled in the HTML front end and the SQL back end. This reduces the amount of code which I have to write, and directly contributes to loose coupling which is supposed to be a good thing.
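As a minimal sketch of this approach (the class and its validation rule are invented for illustration; in my framework the bulk of this logic is actually inherited from an abstract class):

<?php
class person
{
    public array $errors = [];

    public function insertRecord(array $fieldarray): array
    {
        // the whole row is validated in one go; any problems go into $this->errors
        if (empty($fieldarray['last_name'])) {
            $this->errors['last_name'] = 'This field is required';
        }
        // ... insert the row into the database ...
        return $fieldarray;   // the complete row comes back out as a single array
    }
}

$dbobject   = new person;
$fieldarray = $dbobject->insertRecord($_POST);   // the whole $_POST array goes in as-is

if (empty($dbobject->errors)) {
    echo 'Inserted ' . ($fieldarray['first_name'] ?? '') . ' ' . ($fieldarray['last_name'] ?? '');
} // if
?>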
I don't use object interfaces as they were invented to get around a restriction in statically typed languages. PHP is not statically typed so it does not have this restriction, therefore adding code to avoid a restriction which does not exist would be a violation of YAGNI. Besides, I get better results from using abstract classes.
The "approved" technique is to handle associations and aggregations using custom code with each entity. I prefer to use standard solutions which I have built into the framework.
The rule "favour composition over inheritance" was formulated by someone who never learned how to use inheritance properly. Instead of inheriting from one concrete class to create a different concrete class you should only ever inherit from an abstract class. As well as giving instant access to reusable code it also enables the Template Method Pattern which lies at the heart of framework design.
As an argument against inheritance some bright spark argued that "Inheritance is a procedural technique for code reuse", but as someone who programmed in COBOL, the most widely used procedural language of all time, I know that this is not true. Instead I contend that it is object composition which deserves that description.
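To show the difference in concrete terms, here is a highly simplified sketch of an abstract class providing a Template Method, with a subclass supplying only the steps that vary. The class and method names are invented for illustration and are not actual framework code:

<?php
abstract class generic_table
{
    // the template method: an invariant sequence of steps inherited by every subclass
    public function insertRecord(array $fieldarray): array
    {
        $fieldarray = $this->validateInsert($fieldarray);
        $fieldarray = $this->preInsert($fieldarray);    // customisable "hook" method
        $this->dmlInsert($fieldarray);
        return $fieldarray;
    }

    protected function validateInsert(array $fieldarray): array { return $fieldarray; }
    protected function preInsert(array $fieldarray): array      { return $fieldarray; }

    abstract protected function dmlInsert(array $fieldarray): void;
}

class product extends generic_table
{
    // only the behaviour which is unique to this table is defined here
    protected function preInsert(array $fieldarray): array
    {
        $fieldarray['created_date'] = date('Y-m-d');
        return $fieldarray;
    }

    protected function dmlInsert(array $fieldarray): void
    {
        // build and execute the INSERT statement for the PRODUCT table
    }
}

$dbobject   = new product;
$fieldarray = $dbobject->insertRecord($_POST);
?>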
Namespaces were added to PHP to solve a problem when importing third-party libraries into your application as there may be name clashes between that library and your application. This is because the library was developed without any knowledge of your application, so any name clashes will produce an error which won't be noticed until after the library has been installed. All those libraries which are imported using Composer would have been built using namespaces as standard. It should be obvious to even a junior programmer that if every third-party library automatically includes namespaces then there is no need to include them in your application as that would be overkill. It should also be obvious that PHP itself does not need to use namespaces as any name clashes would be reported when the code being developed was run, and would have been fixed before the code was released.
It may come as a surprise to some of you numpties out there but RADICORE is not a library which can be installed using Composer, it is a framework which is an application, and as such forcing it to use namespaces would not offer any benefits and so would be a violation of YAGNI.
Autoloaders are the solution to the self-inflicted problem of requiring multiple files to be loaded from multiple locations before a class can be instantiated. This arises because far too many junior programmers do not understand the meaning of encapsulation and high cohesion.
I avoid the problem by only having one class file for each table, which is always stored in a standard location, and that file contains all the methods which are required by that class. The idea that a class should not contain more than X number of methods was obviously dreamed up by some schoolboy who can't count above ten without taking his shoes and socks off. Provided that the principle of high cohesion is maintained I do not recognise any limit on the size of a class.
The idea that "decoupling" your software is a good idea shows a complete lack of understanding of the term "coupling". If there is a call from one module to another then they are coupled whether you like it or not, and the strength of that coupling is either tight or loose. To "decouple" means to remove the call. To introduce a third module to act as an intermediary between the first two does not remove the coupling, it actually doubles it by replacing one method call with two.
PHP was designed from the outset to be dynamically and weakly typed, and it was provided with the ability to perform automatic type coercion which could convert any string value into a different type depending on the context. An error is produced only when the conversion fails, and that can only happen when the developer fails to sanitize the user's input. The idea that forcing every developer to insert additional code to perform this type casting manually, thus replacing the automatic conversion, could only have been dreamed up by a dogmatist and a pedant, someone who does not understand the meaning of pragmatism and efficiency.
Strict typing was only supposed to be turned on for user-defined functions when the developer deliberately used the declare(strict_types=1) directive, but those stupid core developers made a cock-up and forced it to be turned on automatically for all internal functions. Forcing developers to insert lines of code which are not necessary is therefore a violation of YAGNI.
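A small sketch of the difference, using an invented function:

<?php
// Without strict_types PHP's automatic coercion does the work: $_GET and $_POST
// values always arrive as strings, and numeric strings are quietly converted
// into the required type.
function add_tax(float $amount, float $rate): float
{
    return $amount * (1 + $rate / 100);
}

echo add_tax("100.00", "17.5");   // 117.5 - both strings are coerced to floats

// With declare(strict_types=1) at the top of the calling file the same call
// would throw a TypeError, forcing the developer to scatter manual casts
// such as (float)$_GET['amount'] throughout the code.
?>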
I have read quite a few articles recently extolling the virtues and advantages of value objects in PHP. I have read what they have to say, and as far as I am concerned the whole idea is complete bunkum as all they are doing is introducing a complicated mechanism to achieve something which is ridiculously simple. The notion that it is "better" to create an object for each value, especially one that encapsulates its business rules, just demonstrates that they have a perverted view of what the word "better" actually means. Does it produce less code? Certainly not. Does it reduce the possibility of errors in that code? Certainly not. In fact, the increase in the amount of code you have to write simply increases the possibility of errors creeping in. If you don't believe me then consider the following:
I develop nothing but database applications, known as enterprise applications, and my current ERP application contains 500 tables and 5,000 columns. If I were to create an extra class for each of those columns that would result in 5,000 extra classes, extra code to load and instantiate those 5,000 classes, plus whatever code would be necessary to call the methods on those 5,000 objects.
Considering that I can handle all those columns WITHOUT the need to create, maintain, instantiate and call methods on all those value objects, then I consider the whole idea of value objects to be a complete waste of time and a violation of YAGNI. Not only that, I have never seen anybody identify the problem for which value objects are the solution, and it is a policy of mine that if I don't have a problem then I don't need a solution.
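To put some flesh on that argument, compare the two styles below. The value object, the column and the specification entries are all invented for illustration:

<?php
// Value-object style: one extra class for every one of those 5,000 columns.
final class OrderQuantity
{
    public function __construct(private int $value)
    {
        if ($value < 1) {
            throw new InvalidArgumentException('quantity must be at least 1');
        }
    }
    public function value(): int { return $this->value; }
}
$qty = new OrderQuantity(10);

// Single-array style: the same rule is held as metadata and checked by one
// generic routine which is shared by every column of every table.
$fieldspec  = ['order_qty' => ['type' => 'integer', 'minvalue' => 1]];
$fieldarray = ['order_qty' => $_POST['order_qty'] ?? 0];

$errors = [];
foreach ($fieldspec as $field => $spec) {
    if ($spec['type'] === 'integer' && !is_numeric($fieldarray[$field])) {
        $errors[$field] = 'This is not a valid number';
    } elseif (isset($spec['minvalue']) && $fieldarray[$field] < $spec['minvalue']) {
        $errors[$field] = 'This value is below the minimum';
    }
}
?>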
An enterprise application may have to deal with different business areas for that organisation, such as Order Processing, Invoicing, Inventory/Stock Control and Shipments. Although each of these areas handles totally different data and totally different business rules, these differences can all be handled in a similar fashion. All the data is spread across numerous database tables, and each table can be handled in exactly the same way. The business rules will be different, but they can all be handled in the same way. By building these similarities into a separate system with components that can be shared, each business area can then be regarded as a separate sub-domain or sub-system as part of a larger domain or system.
The RADICORE framework was built to develop database applications as it provides pre-written components to deal with all the similarities. Each subsystem is developed as an extension or add-in to the framework and then run under the control of the framework.
In the previous section I identified those so-called "best practices" which I ignore. In this section I will describe my path to a set of practices which have yielded better results.
Everything started with the following basic observations from my previous experience of developing database applications using COBOL and UNIFACE. Note that during this time I had worked with a mixture of hierarchical, network and relational databases.
Smart data structures and dumb code works a lot better than the other way around.
In a large ERP application, such as the GM-X Application Suite, which is comprised of a number of subsystems, each subsystem has a unique set of attributes:
Despite the fact that these two areas are completely different for each subsystem, they each have their own patterns and so can be handled using standard reusable code provided by the framework:
Switching to a new language is not just about switching to a different syntax, it sometimes means switching to a different way of writing and structuring your code. With COBOL we used an ordinary text editor, so there was no debugger. The code we put into a single source file had a data division for data definitions, and a procedure division for code. The code itself consisted of statements, sentences (one or more statements terminated with a period), paragraphs and optional sections. The PERFORM statement was for performing a paragraph or section within the current source file while the CALL statement was for passing control to a subprogram/subroutine which was compiled from a separate source file. As a junior developer I had been taught to create a small number of monolithic programs which were large and complex, but later on, while developing my first Role Based Access Control system, I learned that a larger number of smaller but simpler programs were easier to build and maintain.
UNIFACE was totally different. Instead of a text editor we had to use a special piece of proprietary software known as a Graphical Form Painter (GFP) with which we built form components which referenced entities within its Application Model, an internal database. Entities (tables) were first constructed within the Application Model to define their columns, indexes and relationships, then exported to produce CREATE TABLE scripts in the chosen DBMS. Proc code was entered into various pre-named triggers where different triggers could be fired depending on different mouse movements. Each form component was compiled in the GFP and then run via the UNIFACE Runtime Engine. The early versions were 2-tier as the form component combined the user interface and the business logic into a single unit with all database access logic provided by a separate database driver which was provided by UNIFACE, where there was a different version for each DBMS. We never had to write any SQL queries ourselves as they were all generated by the database driver. Unfortunately each entity in a component could only build SQL queries for that particular table, which meant that it was not possible to build queries with JOINs to other tables, thus producing what is known as the N+1 Problem.
UNIFACE v7.2.04 allowed a form component to be split into two by separating the user interface from the business logic, which was moved to a separate service component, one for each entity, thus converting it from 2-tier into 3-tier. The transfer of data between the outer presentation layer and the middle business layer was via XML streams. This version also introduced the idea of Component Templates.
UNIFACE was originally developed to create desktop applications, but when they saw the growing demand for internet applications they decided to add in the capability to create web pages. I found their approach to be too clumsy and too clunky. I only worked on one project which attempted to use this new capability, and it was a complete and utter disaster. I decided that UNIFACE was the wrong choice when it came to building web applications, so I looked for a different language that was fit for this purpose. That is when I discovered PHP. I played with it, I liked what I saw, (especially the ability to Generate dynamic web pages using XSL and XML), so I decided to build another version of my development framework to see if I could boost my levels of productivity.
PHP uses scripts which can be constructed using a simple text editor, but I prefer to use a full blown Integrated Development Environment (IDE) with colour coding and debugging. PHP is a multi-paradigm language in that you can write code which is completely procedural, or you can use objects if you want to. It does not force any particular structure on the developer, so you can choose whatever structure you like whether that is monolithic or multi-tier. I found PHP very easy to learn due to the quality of the online manual and the sample code which was available in numerous online resources and books. I was particularly impressed with arrays which could be indexed, associative or multi-level and which were much better at handling collections of data items, such as database data, than the records or aggregated data types that I had used before.
I quickly accustomed myself to the fact that a PHP script could be activated either by an HTTP GET request from the browser's address bar, or an HTTP POST from an HTML form. I learned how to overcome PHP's stateless nature by using the session handling functions, and I learned how to activate another script from within the current script using the header('Location: ...') function.
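A bare-bones sketch of those two techniques (the script name is invented):

<?php
session_start();                                // make $_SESSION available

if ($_SERVER['REQUEST_METHOD'] === 'POST') {
    $_SESSION['last_action'] = 'insert';        // remember something across requests

    header('Location: person_list.php');        // then activate another script
    exit;
}

echo 'Last action was: ' . ($_SESSION['last_action'] ?? 'none');
?>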
While I had heard about this thing called "object oriented programming" during my work with UNIFACE I hadn't a clue what it meant or what made it so special, so I did some research on the interweb thingy. I found a few descriptions which used vague and meaningless terminology, but then I came across the following description:
Object Oriented Programming entails writing programs which are oriented around objects, thus taking advantage of Encapsulation, Inheritance and Polymorphism to increase code reuse and decrease code maintenance.
After looking at the descriptions in the PHP manual I identified the distinguishing features, those which exist in OO languages but not in non-OO languages, as follows:
The act of placing data and the operations that are performed on that data in the same class. The class then becomes the 'capsule' or container for the data and operations. This binds together the data (known as 'properties') and the functions (known as 'methods') that manipulate the data.
The unique feature of classes is that you can group several functions together, but you have to instantiate that class into an object before you can call any of its methods, as in $object->method(). The only difference between a procedural function and a class method is that a class method can reference the pseudo-variable $this. Note that while it is possible to call a class method statically without instantiating it into an object first, as in class::method(), this does not qualify as "object oriented" as there is no object involved.
Once an object has been created you can call a method to load data into its properties, and that data will remain in that object even when that method finishes. This means that you can call other methods to add/change/read that data. This is not possible with a procedural function as any data stored within that function simply disappears when that function finishes. This also means that you can create multiple instances of the same class, and each of those instances can hold different data. The data inside an object will be available until the object is deleted.
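A trivial sketch of this behaviour, using an invented class:

<?php
class basket
{
    private array $items = [];                         // the encapsulated data

    public function addItem(string $product, int $qty): void
    {
        $this->items[$product] = ($this->items[$product] ?? 0) + $qty;
    }

    public function getItems(): array
    {
        return $this->items;                           // still there on a later call
    }
}

$basket1 = new basket;
$basket2 = new basket;
$basket1->addItem('widget', 2);
$basket2->addItem('gadget', 5);

print_r($basket1->getItems());   // Array ( [widget] => 2 )
print_r($basket2->getItems());   // Array ( [gadget] => 5 ) - each instance holds its own data
?>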
I do not accept the description that encapsulation is about information hiding, which could mean implementation hiding and/or data hiding, for the following reasons:
One purpose of APIs is to hide the internal details of how a system works, exposing only those parts that a programmer will find useful, and keeping them consistent even if the internal details change later. An API may be custom-built for a particular pair of systems, or it may be a shared standard allowing interoperability among many systems.
Documentation for the function will identify its signature with a description of what it does and how to use it, but it does not identify its implementation, the code behind the signature, as that may change over time. How it does what it does is of no concern to users of that function, just that it does it, quickly and reliably.
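As a simple illustration (the function is invented), the signature below is all that any caller ever needs to know; the code behind it can be rewritten at any time without any caller being affected:

<?php
function getDaysBetween(string $start_date, string $end_date): int
{
    // today it happens to be implemented with the DateTime extension ...
    $diff = date_create($start_date)->diff(date_create($end_date));
    return (int)$diff->days;
    // ... tomorrow it could be rewritten using plain timestamp arithmetic,
    // and no calling code would need to change.
}

echo getDaysBetween('2024-01-01', '2024-03-01');   // 60
?>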
For more details please also refer to What Encapsulation is NOT.
The reuse of base classes (superclasses) to form derived classes (subclasses). Methods and properties defined in the superclass are automatically shared by any subclass. A subclass may override any of the methods in the superclass, or may introduce new methods of its own.
Note that I am referring to implementation inheritance (which uses the "extends" keyword) and not interface inheritance (which uses the "implements" keyword).
The way to avoid any problems is to only ever inherit from an abstract class. If you inherit from a concrete class it may contain implementations that you do not want, and it is not possible to "un-inherit" an unwanted method.
Note also that I do not create deep class hierarchies just because some objects share a common attribute.
For more details please also refer to What Inheritance is NOT.
Same interface, different implementation. The ability to substitute one class for another. This means that different classes may contain the same method signature, but the result which is returned by calling that method on a different object will be different as the code behind that method (the implementation) is different in each object.
While the PHP manual on Classes and Objects described encapsulation (how to create a class) and inheritance (using the keyword extends), it did not have a description for polymorphism nor how it could be used, so I had to work that out for myself.
For more details please also refer to What Polymorphism is NOT.
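A small sketch of polymorphism in action, using invented classes:

<?php
// Two classes share the same method signature ...
class csv_writer
{
    public function write(array $row): string
    {
        return implode(',', $row) . "\n";
    }
}

class xml_writer
{
    public function write(array $row): string
    {
        $output = '<row>';
        foreach ($row as $name => $value) {
            $output .= "<$name>$value</$name>";
        }
        return $output . "</row>\n";
    }
}

// ... so the calling code neither knows nor cares which one it has been given.
function export(object $writer, array $row): string
{
    return $writer->write($row);
}

echo export(new csv_writer, ['id' => 1, 'name' => 'Fred']);   // 1,Fred
echo export(new xml_writer, ['id' => 1, 'name' => 'Fred']);   // <row><id>1</id><name>Fred</name></row>
?>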
At this time I was not aware of all those things called "best practices" which I was supposed to follow, so I went ahead to see how I could take advantage of encapsulation and inheritance (with polymorphism still being a mystery) in order to fulfil the aims of OOP, which are to increase code reuse and decrease code maintenance. My previous experience had already taught me that the more reusable code I had at my disposal the less code I had to write, and that being able to achieve results by writing less code was the path to higher productivity.
There were several features I found in UNIFACE which influenced my approach to rebuilding my framework in PHP:
I experimented with ways to implement these ideas in PHP by first building a small sample application as a Proof of Concept (POC) using the following steps:
This is how I incorporated Encapsulation into my framework.
"10"
is acceptable for an integer column in the database the string "10 green bottles"
or "bottles, green, 10"
is not as it will return FALSE when given to the is_numeric() function. This is where my $fieldspec and $fieldarray variables came in handy as it made it possible to create a standard validation object which could compare a column's value from one array with its specifications in the other array. This meant that I did not have to insert any code into a Model class to perform this basic validation.Figure 1 - A typical Family of Forms
Note that each of the objects in the above diagram is a hyperlink.
Figure 2 - MVC and 3 Tier Architecture combined
Note that each of the above boxes is a hyperlink which will take you to a detailed description of that component.
This is how I incorporated Inheritance into my framework. Note that I did not attempt to inherit from a concrete class as I instinctively knew that this could cause problems. I avoid all possibility of such problems by only ever inheriting from an abstract class, thus making the mantra "favour composition over inheritance" totally obsolete.
I started with a single script for each task which contained the following:

<?php
require 'classes/foobar.class.inc';
$object = new foobar;
$fieldarray = $object->insertRecord($_POST);
if (empty($object->errors)) {
    $result = $object->commit();
} else {
    $result = $object->rollback();
} // if
?>
I later split each of these into two scripts containing the following:
-- a COMPONENT script
<?php
$table_id = "foobar";                      // identify the Model
$screen = 'foobar.detail.screen.inc';      // identify the View (a file identifying the XSL stylesheet)
require 'std.add1.inc';                    // activate the Controller
?>

-- a CONTROLLER script (std.add1.inc)
<?php
require "classes/$table_id.class.inc";
$object = new $table_id;
$fieldarray = $object->insertRecord($_POST);
if (empty($object->errors)) {
    $result = $object->commit();
} else {
    $result = $object->rollback();
} // if
?>
This is how I incorporated Polymorphism into my framework. If you look carefully you should notice that, as every Model class inherits the same set of methods from the abstract class, any Controller which calls those methods can work with any Model in the application. This fits the description "same interface, different implementation" as the same method call will insert records into a different table depending on which class file it is instructed to work with.
Note also that polymorphism on its own is not much use unless you have a mechanism to take advantage of what it has to offer, and that mechanism is called Dependency Injection (DI). For many years I did not know that I had invented a totally different technique for implementing Dependency Injection, as all the descriptions that I read insisted that the dependent object be instantiated before it was injected, whereas all I do is inject the name of the class, which is then instantiated internally.
As this prototype proved that all my ideas worked I then proceeded to build a brand new version of my framework which I called RADICORE. Full details can also be found in Evolution of the RADICORE framework. While building this framework I added to my library of reusable software in the following ways:
As a result of these efforts I have managed to create these Levels of reusability.
Having this amount of reusable software at my disposal means that when I am developing an application there is a huge amount of code which I don't have to write, which can be summarised as follows:
These are reasons why I consider some ideas on how to do OOP "properly" to be complete rubbish:
28 Nov 2024 | Added Some Universal practices |
16 Aug 2024 | Added Better practices |
27 Jul 2024 | Added I don't do Domain Driven Design |