What Does Clean Code Mean to You?

The very basic question

It is a very basic question for a middle-level programmer (a professional with advanced skills who is not yet a master), and it has already been discussed for perhaps decades in several discussion forums.
However we arrive at it, the basic question remains: what is clean code, actually? What follows is purely my opinion about clean code.

Is Maintainable / Clean Code a Requirement for Your Apps?

Dirty code? That is code produced without considering maintainability. You can consider code dirty when it is tightly coupled, uses arrays or maps instead of proper data structures, or relies on hacks like global variables. One characteristic of dirty code is that when the application becomes large or complex, it is hard to extend or modify, and error-prone while doing so. Does your application need the opposite (so-called clean code)? Not every app needs clean code, and here is why.

Floyd-Warshall in a Nutshell

This article is intended to explain the Floyd-Warshall algorithm (a shortest-distance-finding algorithm), especially for those who are new to it.

Floyd-Warshall Algorithm

As described by Wikipedia, it is "a graph analysis algorithm for finding shortest paths in a weighted graph with positive or negative edge weights". Simply put, it is an algorithm that finds the shortest path between all pairs of points, given several possible routes with different "costs" between them. As simple as that.

Example

Say that we have 5 points (1, 2, 3, 4 and 5) connected like this:

1---2
|  / \
| /   5
|/   /
3---4

With the path costs described like this (if you have difficulty picturing the units, just read them as meters or kilometers):
1 --> 2 = 2
1 --> 3 = 3
2 --> 5 = 1
2 --> 3 = 7
3 --> 4 = 3
4 --> 5 = 2

And we need the shortest route from 5 to 3. Using plain intuition, we would use this sort of algorithm:
  1. Pick point 5
  2. Pick all possible routes, in this case:
    (1) 5-2-1-3 (the sum is 1 + 2 + 3 or 6)
    (2) 5-2-3 (the sum is 1+7 or 8)
    (3) 5-4-3 (the sum is 2 + 3 or 5)
  3. Pick the lowest cost, which is route 5-4-3, with cost of 5
That's it for the single-point destination calculation.
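The manual procedure above can be sketched in Python. This is a hypothetical helper (not from the article): it walks every simple route from 5 to 3 depth-first over an adjacency map built from the cost list above, then picks the cheapest.

```python
# Adjacency map with costs, built from the cost list above (paths are two-way)
graph = {
    1: {2: 2, 3: 3},
    2: {1: 2, 3: 7, 5: 1},
    3: {1: 3, 2: 7, 4: 3},
    4: {3: 3, 5: 2},
    5: {2: 1, 4: 2},
}

def all_route_costs(start, goal, visited=()):
    """Yield the total cost of every simple route from start to goal."""
    if start == goal:
        yield 0
        return
    for nxt, cost in graph[start].items():
        if nxt not in visited:
            for rest in all_route_costs(nxt, goal, visited + (start,)):
                yield cost + rest

# The three routes cost 6, 8 and 5; the lowest wins
print(min(all_route_costs(5, 3)))  # -> 5
```

This works fine for a handful of points, but the number of simple routes explodes on larger graphs, which is exactly where Floyd-Warshall helps.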

Using Floyd-Warshall Algorithm

Using the Floyd-Warshall algorithm, you can find the shortest path between every pair of points at once. There are a few steps to it.

Represent the paths as a two-dimensional array

The paths from the example above need to be represented as a two-dimensional array. For example, say we want to map point 2's row into the array; it would be represented like this:

{ 2, 0, 7, inf, 1 }

The first value (2) represents the route between point 2 and point 1; in the array, it is path[2][1]. The second value (0) represents the route from point 2 to itself, which is zero distance. The fourth one, inf (infinity), represents the route from point 2 to point 4, which cannot be reached directly, so its cost is infinite. In a programming language, infinity can be replaced by the maximum value of int (carefully, though: adding two such values overflows, so in practice a large constant that cannot overflow is safer).

In short, we can represent the two dimensional array as this:

         from
       1 2 3 4 5
     ------------
   1 | 0 2 3 i i
   2 | 2 0 7 i 1
to 3 | 3 7 0 3 i
   4 | i i 3 0 2
   5 | i 1 i 2 0

(the i symbol represents infinity)

The pseudocode

The snippet below is pseudocode for the Floyd-Warshall algorithm, based on the case above.

for k from 1 to 5
   for i from 1 to 5
      for j from 1 to 5
         if dist[i][j] > dist[i][k] + dist[k][j] then
            dist[i][j] = dist[i][k] + dist[k][j]

This nested loop iterates over every pair of points and checks, for each intermediate point k, whether going through k gives a shorter path than the one currently stored. If it is shorter, the stored distance is replaced.

Say that we are now at the iteration:
k = 3
i = 4
j = 1

Then the comparison becomes: if dist[4][1] > dist[4][3] + dist[3][1], or the same as if inf > 3 + 3. The comparison is true, so dist[4][1] is replaced by 6, the cost of route 4-3-1 (later iterations with other k values may improve it further). After running the full loop, this is the expected result:

         from
       1 2 3 4 5
     ------------
   1 | 0 2 3 5 3
   2 | 2 0 5 3 1
to 3 | 3 5 0 3 5
   4 | 5 3 3 0 2
   5 | 3 1 5 2 0

Reading the result in this case, the shortest cost from 5 to 3 is 5, which matches our first, manual attempt.
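The pseudocode translates almost line for line into runnable Python. This is a minimal sketch (not the article's own code): it builds the distance matrix from the cost list given earlier, treating every path as two-way, and then runs the triple loop.

```python
INF = float('inf')  # plays the role of the "i" symbol in the matrix

# Cost list from the example; each path can be walked in both directions
edges = [(1, 2, 2), (1, 3, 3), (2, 5, 1), (2, 3, 7), (3, 4, 3), (4, 5, 2)]
n = 5

# dist[i][j] = best known cost from i to j (index 0 unused; points are 1-based)
dist = [[0 if i == j else INF for j in range(n + 1)] for i in range(n + 1)]
for a, b, cost in edges:
    dist[a][b] = dist[b][a] = cost

# Floyd-Warshall: try every point k as an intermediate stop
for k in range(1, n + 1):
    for i in range(1, n + 1):
        for j in range(1, n + 1):
            if dist[i][j] > dist[i][k] + dist[k][j]:
                dist[i][j] = dist[i][k] + dist[k][j]

print(dist[5][3])  # shortest cost from point 5 to point 3 -> 5
```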

Designing Systems, the Art and Pitfalls

This article is mainly based on this Stack Overflow question about designing systems. As I have written before about learning by teaching, this is a good example of it. Even though I had experience designing systems, I still could not define the exact steps needed to design one. Now I have learned much and am able to provide explicit steps for designing a system, at least from my experience.

The High Level and Low Level Module

In the context of system (application) design, a high-level module is an overview picture of how the system interacts with the user and with other integrated systems, while a low-level module is a detailed picture of how the subsystems inside interact with each other. That's it: a system design is divided into these two modules.

High Level Module

We need to divide the design into separate modules because it is hard to design a system without a high-level (overview) module. High-level modules are more understandable to business users. Moreover, there are many pitfalls besides system errors, such as wrong use-case scenarios and wrong business-rule validations. Defining those pitfalls in the high-level module design is easier and faster. And who does not love a simpler, faster, easier job? That's why we should do high-level module design.

Taken from my Stack Overflow answer, consider a standard point-of-sale system that has the following sub-modules:

  • ordering
  • committing order
  • down payment
  • goods delivery
  • return

Here are the steps for defining a high-level module design:

  1. Define the standard use cases between the user and the system
  2. Pour the use cases into a collaborative diagram such as a rich picture (or anything familiar)
  3. Define the exceptional use cases. If an exception can be defined easily, put it into the model immediately. If not, mark the model with the exception for further discussion with the business teams. Some use-case exceptions can be: changing a committed order, changing a committed order after down payment, cancelling a paid order, goods out of stock, etc.
  4. Iterate the process. Usually step 3 feeds back into step 1 (an exception can / will become another use case). For example, changing a committed order can become a use case of its own, since the chance of it occurring is high.
  5. When step 3 completes without additional use-case exceptions (all use cases have been handled), I usually add value-adding operations.
    Those operations can be notifications (email / on-screen), historical data maintenance, reminders, error handling, etc. Some of them can become use cases as well, so you may need to iterate back to step 1.
    For example, if an error occurs during down-payment settlement, you may need another use case to input the down-payment data manually, or you may need to maintain a reminder mechanism in another system.
  6. Move on to the low-level model
Well, each point could be a separate discussion of its own.

Low Level Module

Low-level module design, on the other hand, gives a more detailed view of the system: it shows how each subsystem works with the others. Many times, low-level modules are overlooked by management because it is far faster to begin coding immediately than to create the low-level model. So what is the benefit of low-level module design?

Here are the benefits of low-level module design that are often overlooked:

  1. It acts as documentation
    Class diagrams, database design, state diagrams, flowcharts, sequence diagrams: all of them can serve as technical documentation or a "blueprint" of the system. Is it needed? Yes, in most cases; usually as the first step of debugging
  2. It catches pitfalls, errors and exceptions early
    Most of the time, errors and exceptions are caught during integration testing. When testing reveals an error, you review the general process of the system; by then it is too late, because your code has already been constructed on top of your database structures
  3. It keeps your code base clean
    Little hacks and tweaks are sometimes (most of the time) done to fix things during testing (see point 2). With a low-level model, you are forced to define the general structure of your code base, so pitfalls are avoided early, your code stays cleaner, and less refactoring is needed
  4. It can be reviewed easily
    Discussing designs with peers using a low-level module design is easier and faster than reviewing code
  5. It can be used as a basis for review and evaluation
    After the code is complete, you can compare the mechanism and structure against the low-level design. This helps find pitfalls or unfinished work earlier (before integration tests)
Well, there are many benefits, but they are often overlooked by management, usually because schedules are made with the waterfall model: development only moves forward (design, code, testing, publishing), with no allowance for exceptions in between (bug fixes during testing, redesign during coding, etc.). And for a simple CRUD application, the benefit of a low-level model seems like overkill (even though it is nice to have) to most managers, whose reasoning is: "it is okay to publish buggy code rather than spend 40 hours designing the low-level model."

Then how do you design a low-level module? Well, the answer lies in many books, such as UML guides for OOP.

Learning by Teaching

Docendo discimus, or learning by teaching, is one good method to improve yourself (or at least, myself).

Compared to learning by yourself or from others, learning by teaching gives you better experience, knowledge and skills, and can be very useful. That is because, in order to teach someone, you need to know the answer to the problem beforehand, or at least have expertise in the subject. Teaching also requires you to explain the method correctly and present the knowledge well; in other words, to convert tacit knowledge into explicit knowledge. On top of that, you must prove your knowledge and defend it against any disagreement.

Having Expertise in the Subject

You cannot teach or give knowledge if you do not have expertise in the subject. There may be exceptions for seniority or positional power, but that is another topic. It also means that if you can already teach, you already have some level of expertise in the subject, which is a good indicator to measure yourself by.

If you need to teach, answer questions or share knowledge, it means you need to know the field and to become an expert in it too. It forces you to learn. Even while teaching, or afterwards (during evaluation), you can still learn from your own teaching. It is a very good way to improve.

Able to Explain

Sherlock Holmes once said, in A Study in Scarlet: "It was easier to know it than to explain why I know it. If you were asked to prove that two and two made four, you might find some difficulty, and yet you are quite sure of the fact." It is not easy to explain something that you know; most of the time, it is easier just to understand it yourself.

That is one good reason why learning by explaining is better than learning by yourself. If you can already teach or explain the knowledge, it means you already possess it at a good level. If you do not know the thing itself, how could you explain it?

Defend from any Disagreement

Disagreement may come from other sources. The toughest kind comes from those with better expertise in the field (someone respected as a master, such as Martin Fowler for OOP design). To prove that your knowledge is correct (or at least acceptable), you must be able to defend it against disagreement. (I will not dwell here on the worst type of disagreement in an organization: disagreement from people who hold power, driven purely by their personal taste.)

Published books are good to reference as sources: if you cannot explain the knowledge well, or cannot defend it, you can use the reference as your shield. A published book is good because it is well written, mostly easy to understand, and accepted by most people. Moreover, it is written by experts in the field, which improves its correctness (as described in the first point above). The authors themselves are already at a level where they can defend their statements against disagreement.

Don't worry if you find that your statements cannot be defended. It only means that you still need to learn. Moreover, you can use the disagreement as a starting point and begin your research from it. Once you know the facts needed to defend your position, you make your statements again, and the process iterates. Whether your statement can be defended or not, there is a learning process in it, and that is good.

Conclusion

Learning by teaching is a good way to improve yourself. Most of the time, you need to do other kinds of learning before you can teach, so it naturally leads to further learning.

Programming Idealism, Avoiding Hungarian Notation

Hungarian Notation

From Wikipedia, Hungarian notation is "an identifier naming convention in computer programming, in which the name of a variable or function indicates its type or intended use". There are two types of Hungarian notation: Systems Hungarian and Apps Hungarian.
Systems Hungarian is intended to emphasize the variable's type. It can be useful in interpreted / dynamic languages such as JavaScript or PHP, but it is useless in statically typed languages. In compiled OOP languages such as Java and C#, where the compiler already enforces data contracts and type casts, it has no benefit at all.
Apps Hungarian is intended to describe the purpose of a variable, regardless of its type. As Joel Spolsky explained in his article, some variables are prone to errors even with compile-time type checking. One of his examples is unsafe versus safe strings (HTML-encoded text, for example), where the type is the same but the purpose is different.
The article was posted in 2005, which means it has been around for more than 7 years now. Given the current abilities of compilers and programming languages, what can we do to improve the design?

Problem

There lies one and only one problem in Joel's solution: the wrong code can still pass the compilation phase. As Mark Seemann stated in his article, faster feedback means lower cost to correct errors. Ideally, we would catch all of a system's errors during compilation, but that is mostly impossible for some kinds of error (such as parsing errors or business-rule errors, which cannot be caught by the compiler). In short, you want wrong code to produce compile errors as much as possible, rather than run-time exceptions.

The Proposed Design

Using Joel's example of safe and unsafe strings, we need a design where mixing up a safe and an unsafe string gives a compile error. Using C# syntax, as usual for an OOP language, I first define some classes. This class is for the unsafe string:

public class DecodedHtmlString
{
    public DecodedHtmlString(string decodedString)
    {
        this.decodedString = decodedString;
    }

    private string decodedString;
    public override string ToString()
    {
        return decodedString;
    }
}

Simple enough. It adds no behavior, but it gives you a self-documenting data type: the class represents an HTML string in decoded form, and no encoding happens here. Next, the safe (encoded) string:

public class EncodedHtmlString
{
    public EncodedHtmlString(DecodedHtmlString decodedString)
    {
        this.encodedString = System.Web.HttpUtility.HtmlEncode(decodedString.ToString());
    }

    private string encodedString;
    public override string ToString()
    {
        return encodedString;
    }
}

Again, a self-explaining class that produces an encoded string from a decoded one. Now we want the two classes to convert to each other. There are several options, such as cast operators or static parse methods, which are easy enough in C# that I won't explain them. Here I will use constructor injection and To... conversion methods instead. For DecodedHtmlString, we add a constructor and a ToEncodedHtmlString method:

    public DecodedHtmlString(EncodedHtmlString encodedString)
    {
        this.decodedString = System.Web.HttpUtility.HtmlDecode(encodedString.ToString());
    }
    public EncodedHtmlString ToEncodedHtmlString()
    {
        return new EncodedHtmlString(this);
    }

And for the EncodedHtmlString side:

    private EncodedHtmlString() { }

    public static EncodedHtmlString FromEncodedString(string encodedString)
    {
        EncodedHtmlString result = new EncodedHtmlString();
        result.encodedString = encodedString;
        return result;
    }
    public DecodedHtmlString ToDecodedHtmlString()
    {
        return new DecodedHtmlString(this);
    }

Consumer

Let's see from consumer point of view:

string unsafeString = Request.Forms["CUSTOM_INPUT"]; // input from form
string safeString = System.Web.HttpUtility.HtmlEncode(unsafeString); // encoded safe string for reference
DecodedHtmlString decoded;
EncodedHtmlString encoded;

// initial creation
decoded = new DecodedHtmlString(unsafeString); // correct
encoded = EncodedHtmlString.FromEncodedString(safeString); //correct

// type casting
encoded = decoded.ToEncodedHtmlString(); // correct
encoded = new EncodedHtmlString(decoded); // also correct
decoded = encoded.ToDecodedHtmlString(); // correct
decoded = new DecodedHtmlString(encoded); // also correct

// wrong initial creation
decoded = new DecodedHtmlString(safeString); // wrong
encoded = EncodedHtmlString.FromEncodedString(unsafeString); //wrong

// to primitive
unsafeString = decoded.ToString(); // correct
safeString = encoded.ToString(); // correct

// wrong to primitive
unsafeString = encoded.ToString(); // wrong
safeString = decoded.ToString(); // wrong

We get 4 possible wrong usages, namely the from-primitive and to-primitive assignments; every other scenario is correct. Now let's see whether we can exploit the type checking further by declaring a parameter with one of these data types.

public void WriteToDatabase(EncodedHtmlString encoded)
{
    string encodedString = encoded.ToString();
    // doing with encodedString
}

WriteToDatabase(unsafeString); // compile error
WriteToDatabase(safeString); // compile error
WriteToDatabase(decoded); // compile error
WriteToDatabase(encoded); // correct

Now we get 3 compile errors and one correct call. If you prefer to get compile errors, this is an improvement, since you are now protected from 3 possible parameter-assignment mistakes. And if you carefully use the two data types instead of passing primitive strings around, you will be fine. The only two things that can still slip past the compiler are the casts to and from the primitive type.

But hey, aren't most operations (at least on safe and unsafe strings) done with the primitive type? If we take Response.Write and database operations into account, it is clear that most of the critical operations use primitive strings (even for URLs, etc.). Moreover, we have added 2 more classes for this design.

Conclusion

We get a design that gives us compile errors instead of run-time errors or silently buggy code. However, we still cannot get one hundred percent bug-free code with this design, and the operations at the primitive boundary remain error-prone. Additionally, it introduces two interdependent classes, increasing coupling.

In the end, it is the framework's support that decides. If the framework supported Encoded and Decoded data types by default, and encouraged you to use them instead of primitives, it might be worth it. With current framework designs, however, this approach is unlikely to give a decent benefit.
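For contrast, here is roughly the same idea sketched in Python (the names are hypothetical, mirroring the C# classes above). Since Python has no compile phase, the wrong usage can only be rejected at run time with an explicit type check, which illustrates why this design pays off mainly in statically typed languages.

```python
import html

class DecodedHtmlString:
    """An unsafe, raw string that may contain markup."""
    def __init__(self, raw):
        self.value = raw

class EncodedHtmlString:
    """A safe string, HTML-escaped on construction."""
    def __init__(self, decoded):
        self.value = html.escape(decoded.value)

def write_to_page(encoded):
    # Accept only the safe type; a raw str or a DecodedHtmlString is rejected,
    # but only when the code actually runs, not at compile time
    if not isinstance(encoded, EncodedHtmlString):
        raise TypeError("write_to_page requires an EncodedHtmlString")
    return encoded.value

decoded = DecodedHtmlString("<b>hi</b>")
print(write_to_page(EncodedHtmlString(decoded)))  # -> &lt;b&gt;hi&lt;/b&gt;
```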

The Ways of Debugging Part 1: Finding the Possible Root Cause

Background

Debugging can be easy for some people and very hard for others. After many years of debugging experience, I find that debugging usually consists of 3 big steps:
  1. Finding the possible root causes,
  2. Proving the possible root causes to find the real root cause, and
  3. Fixing the real root cause
Additionally, debugging hardware has many similarities to debugging software; it follows the same 3 big steps.

Finding the Root Cause

Finding the root cause is the very first step in debugging, and in my opinion also the most decisive one. The total time required for debugging is usually dominated by finding the root cause; it is harder to find the root cause than to fix the system.

To find the root cause, the very first requirement is to understand the system you are debugging. I once needed to debug a system that I did not understand at all, and it took time to understand the system's behavior first. Only then could I continue to look for the possible root cause.

Moreover, don't be surprised: sometimes a user will submit a bug without steps to reproduce it (especially in enterprise systems, where users usually know little about software engineering), complaining about behavior that is actually as designed. Without any documentation (that's right, no documentation at all!) about how the system should behave, I had to find a person who knew the system's behavior to determine whether it was a bug or not. This is why, most of the time, testers can find the possible root cause easily: they know well how the system behaves.

Understanding the system first can also be critical for finding technical flaws, such as race conditions, replication issues, different regional settings, different input formats, hacks or security flaws. Right, bad system design can itself be a root cause of bugs, and combined with not understanding the system, it will give you a headache.

Another requirement for finding the root cause is decent technical knowledge of the platform the system uses. The components a system is built on, such as the database and the application runtime, often behave differently from one another. For example, Java marks methods as virtual by default, while in C# you need to mark them with the "virtual" keyword.

Conclusion

Understanding the system and knowing the platform will give you a significant boost in the time needed to find the possible root cause. I have found a possible root cause of a system bug in only several minutes; that was, of course, made possible by knowledge of the system.

More or less, documentation about how the system behaves will help newcomers and debuggers find the possible root cause. Not only that: with it, the handler can instantly tell whether a bug raised by a user is actually the system's designed behavior, or whether something simply needs to be configured for the user to perform the required action.

Debugging Hardware Problem

Today I booted my PC. The boot process went smoothly until Windows tried to load; suddenly a blue screen appeared and the computer restarted.

If you have had a PC for 3 years or more, you may have experienced something similar. Failure to boot, sudden restarts in the middle of work, and the like can happen due to hardware failure.

An experienced person already knows that hardware problems can occur in many parts: CPU, RAM, hard disk, video card, motherboard, power supply, etc. As with software debugging, finding the part that is the root cause can be trouble. I find that debugging a hardware problem is similar to debugging software.

When my PC booted after the restart, I tried running Windows in safe mode. Strangely, it ran well. Common applications such as the office suite and browser ran normally, minus a proper display and network support due to safe mode.

Normally, you cannot boot at all when the CPU or RAM is in trouble. From this point, I believed the RAM and CPU were still in good shape, eliminating two possibilities. A failing power supply normally prevents booting too, and does not cause a blue-screen error. So more parts were eliminated, leaving only the motherboard and the video card.

Those two parts became the "possible root causes" of the error. From here on, I began testing the possible root causes; this step is meant to prove which possibility is the real root cause.

Luckily, my suspicion was strengthened by a Windows error message in safe mode showing bug check code 116. After a simple search on Google (thanks, Google), I easily found articles relating code 116 to video graphics errors. Now only one thing remained to prove: whether the system would boot up without using the graphics card.

So I opened Device Manager, disabled the display adapter, and restarted the PC. The result: voilà! The system loaded successfully. Applications ran well, the browser connected to the internet, and nothing had a problem except a poor, low-resolution display, plus a little lag because rendering was no longer done on the graphics card.

That leaves the motherboard or the video card as the suspect; I have not pinpointed which yet, because I do not have spare parts to swap in. But the bigger suspect is the video card, because it is older than the motherboard.

Conclusion

Finding the cause of a hardware error is very similar to debugging software. It starts with finding the possible root causes, proving the possibilities, and then fixing the problem. Experience and knowledge help in both cases, speeding up the discovery of possible root causes. And in both cases, you need to know how the system behaves / works, or you will need additional time to find out.

Not All Architecture is Fit for Your Apps

I had an interesting discussion on Stack Overflow with L-Three in this question. I realized it was quite an interesting situation, so I think it needs to be blogged. I'm not yet very experienced with Dependency Injection, though, so my statements may be mistaken.

In short, he was advising a well-structured architecture that uses several good techniques, like dependency injection and command-query separation. As a moderate programmer, I can say the structure is good, clean, easy enough to test, and extensible. But I don't like it. No, it is not that the design isn't good; rather, I have some conditions under which the architecture can't be applied.

Interface Programming: Entity Wrapper to Handle Dynamic Source Object

Background

While working with legacy code, I have found that many people use DataTable/DataSet instead of strongly typed objects. They write code like
string id = row["id"].ToString();
instead of
string id = request.Id;

It becomes a maintenance hell for several reasons:
  • I do not know the data type coming from the database, so I need to debug into the database procedure
  • I do not know whether the data is nullable or not; again, I need to debug into the database
  • When I need to change a data type, I need to search every implementation, change it and make sure it does not break
And last but not least, when I want to make enhancements or modifications, I face 2 options:
  • Keep the same programming style, using the DataTable, at the risk of adding yet another maintenance-hell object
  • Refactor it, with a higher risk of breaking things
I want to use Dependency Injection (DI) for my further development. The lack of strongly typed entities of course prevents me from using DI, so I need to change the DataTable to strongly typed objects before implementing DI.
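The same wrapper idea can be sketched outside C# as well. Below is a hypothetical Python version (the names and fields are invented for illustration): the column names and type conversions are declared once in a typed entity, instead of being scattered through row lookups.

```python
from dataclasses import dataclass

# A raw, map-based row, as a DataTable-style API might return it
raw_row = {"id": "42", "name": "Alice", "credit_limit": "1000.50"}

@dataclass
class CustomerRequest:
    """Strongly typed entity: names and types are declared once, here."""
    id: str
    name: str
    credit_limit: float

    @staticmethod
    def from_row(row):
        # The only place that knows the raw column names and their types
        return CustomerRequest(
            id=str(row["id"]),
            name=str(row["name"]),
            credit_limit=float(row["credit_limit"]),
        )

request = CustomerRequest.from_row(raw_row)
print(request.id)  # typed access instead of row["id"].ToString()
```

Changing a column's type now means changing one conversion in from_row, instead of hunting down every consumer.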

Design, don't Code Yet

 Why - Risks in Development

As a programmer, I sometimes doubt whether I should spend time thinking about and designing an application before developing it. As a single programmer-architect, I have some self-defined projects that I start either code-first or design-first. Logically, they should produce the same result, given that the developer and the architect are the same person. In practice, I'm surprised to find that projects started code-first tend to carry more risk and are more likely to be stopped than design-first ones.

So, does logic not apply here? Apparently not. The reason is basically that the developer is human, and will likely get bored for several reasons:
  • The project does not have an exact requirement and scope
  • The project does not have an exact release strategy
  • The project is most likely not needed by the user

The project does not have an exact requirement and scope

Once, I tried to create a so-called "ideal" application. It was supposed to handle many kinds of business processes, be free of bugs, be easily extensible, and have a good architectural foundation. And it was supposed to work both for transaction handling and as a high-level management reporting tool.

It sounded like a good plan at the beginning; however, with such grand ambitions, I had to drop the development because I got bored while building it. It had no exact scope, no exact plan about what to develop, what to validate, what the process after this or that step was, etc. The scope grew and grew every day I thought about the application, and the development could not keep up with the planning's growth. With no target to accomplish, you lose interest in the development.

The project does not have an exact release strategy

What I mean by "release strategy" here is a strategy for delivering the application. It consists of the release date, the target audience and the target platform. It may include more details, such as how to replace the currently running application without breaking it, or how not to break other applications that depend on it; but essentially it describes what kind of application is to be delivered.

Having no release-date deadline (target) can affect the development scope, since you will think "I have unlimited time to develop this" or "I can add this and that feature before delivering the application, since the release date hasn't been decided".

Lack of a target audience can also affect the scope, because you will try to create an application that can be used by every level of management (from transaction level up to advanced ad-hoc reporting).

Lack of a target platform can demoralize your development. You will be haunted by thoughts such as "will it work well in Firefox, Chrome, or IE?" or "will it work on other Windows versions?". Thoughts like that drag development down, because you will worry about how to check them all every time you make a modification. Don't be bugged by it!

The project is most likely not needed by the user

Any project the user actually needs should have an estimated release date; from the user's point of view, the sooner the delivery, the better. Sometimes you may suspect that an application or enhancement will not really be needed, for example because you can manipulate the database directly instead. That thought demoralizes development, since you don't know exactly how your application benefits the user. Don't develop applications that won't be needed; and if one will be, don't ever treat the workaround (direct manipulation) as a replacement for the application.

Conclusion

Always design your application before you start coding. No matter how skillful a programmer you are, the risk of not having the application designed beforehand is high. It can make your effort go to waste, leaving you with nothing except wondering why this happened. If you cannot do the design, ask someone who is good at it. Ask experts in each field, for example an accountant when designing a finance application, or a headmaster when designing an education application. It can give you a clear vision of what kind of application you want to develop, and of its functionality.

Separation of Model in Design Pattern

Before talking about the model, you can read about what the "model" is in an explanation of the MVC design pattern. My simple explanation of a model (my interpretation, don't use it in exams) is something which represents the structure of data, and possesses the logic to get and/or modify that data.

Usually, a model's logic can be integrated into the controller (or view model), and the structure itself can be represented using data sets (for databases) or XML documents (for XML). So in most cases, developers really can ignore the model and integrate it into the controller itself. So why is it necessary to separate the model?

For a small application, it is okay to ignore the model and integrate it entirely into the controller. But what about large applications? It will be hell if we use data sets or XML documents directly. One slight change to the data structure, and you must search for every controller which uses that data. Yes, I said every controller, and if the application has many controllers, it will be a pain.

Not only that; in addition, a model can hold logic that is bound to the data, so every controller that uses the data shares the same behavior. Let's say a request has some mechanics like discounts. Instead of putting that logic in the controller or the database, we can put it in the model. So in summary, I would say that the model is quite a handy tool for data management.
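To make the idea concrete, here is a minimal sketch in Python (all names are illustrative, not from any framework) of moving data-bound logic such as a discount rule into a model, so that every controller touching the data gets the same behavior:

```python
class RequestModel:
    """Model: owns the request data and the logic bound to it."""

    def __init__(self, amount, discount_rate=0.0):
        self.amount = amount
        self.discount_rate = discount_rate

    def total(self):
        # The discount rule lives in one place,
        # instead of being copied into every controller.
        return self.amount * (1 - self.discount_rate)


# Two different "controllers" reuse the same rule without duplicating it.
def summary_controller(request):
    return "Total due: {:.2f}".format(request.total())

def invoice_controller(request):
    return {"gross": request.amount, "net": request.total()}


req = RequestModel(amount=100.0, discount_rate=0.1)
print(summary_controller(req))
print(invoice_controller(req))
```

If the discount rule ever changes, only `RequestModel.total` needs to be edited; none of the controllers are touched.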

The Popular MVC Design Pattern

If you need reason(s) why design patterns are needed in software programming, you can read my previous post.

Honestly, when I first learned this design pattern, I found it a bit confusing. Moreover, I found it useless to separate the model from the controller, even though I could immediately see the importance of separating the view and the controller. However, after trying to create a PHP project using the CodeIgniter framework, I found the separation to be quite important.

Before talking further about MVC, let me cover the basics of MVC. The view, to put it simply, is the user interface. It relates to everything the user sees, inputs, and chooses, plus the UI logic used to communicate with the controller (in this case, form tags and ajax calls are considered part of the view).

The controller, on the other hand, receives input from the view, processes it with logic (if/else, loops, mathematical logic, etc.), gets data from the model, sends data to the model, and even chooses which view will be displayed after all the processing is done.

The model is the object that you use in the controller. It is the model whose data will be displayed in the view; it holds the logic to modify the data in storage (which can be a database, XML, plain text files, encoded files, etc.), gets the data from storage, and holds the structure of the data.

From that explanation, we can see that it is obvious to separate the view from the controller, in order to separate business logic from UI logic. But why does the model need to be separated from the controller, instead of just handling the model (getting and modifying the data) inside the controller? We can find the explanation in this post.
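The three roles described above can be sketched, framework-free, in a few lines of Python. All names here are hypothetical, chosen only to show which responsibility lives where:

```python
class ArticleModel:
    """Model: holds the data structure and the logic to get it from storage."""
    _storage = {1: "Hello MVC"}  # stand-in for a database or XML file

    @classmethod
    def find(cls, article_id):
        return cls._storage.get(article_id)


def article_view(title):
    """View: only concerned with how the data is presented to the user."""
    return "<h1>{}</h1>".format(title)


def article_controller(article_id):
    """Controller: receives input, asks the model for data, picks the view."""
    title = ArticleModel.find(article_id)
    if title is None:
        return "<h1>Not Found</h1>"
    return article_view(title)


print(article_controller(1))
print(article_controller(2))
```

Note that the view never touches storage and the model never emits HTML; only the controller knows about both.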

Design Patterns: How Important Are They?

Design patterns are commonly used in software application programming. There are some design patterns which are used widely by enterprises and by individual programmers. But how important is this design pattern 'thing'?

The main purpose of design pattern is to separate the application interface (UI) with the business logic. Why is it needed to do such thing?

In my latest job, there was a project which needed to be handed over to me. The project used ASP.NET WebForms. The structure of the project was event-driven, as is the basis of the ASP.NET WebForms design.

The business logic (let's say the logic to submit a request, validate the form, or update the request) was done in the code-behind of the aspx.cs files. Worse, the business logic was sometimes handled in an asmx web service and triggered by jQuery ajax, making it even harder for me to decipher.

Well, the pain did not stop there. The design made the project hard to modify. A little modification could cause errors in other places, and extra effort was needed to propagate the change to those places as well. This is contrary to the principles of object orientation, namely encapsulation and reusability.
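The problem described above can be illustrated with a small, hedged sketch (Python is used here purely for brevity; the names are made up, not from ASP.NET). When a rule lives inside each UI event handler it must be duplicated, while the same rule extracted into its own class is written once and reused:

```python
# --- The code-behind style: the rule is copied into every handler ---
def submit_button_click(form_data):
    if not form_data.get("name"):          # validation copied here...
        return "error: name required"
    return "submitted"

def ajax_submit_endpoint(form_data):
    if not form_data.get("name"):          # ...and copied here again
        return "error: name required"
    return "submitted"


# --- The separated style: one class owns the rule ---
class RequestService:
    """Holds the business rule once, independent of any UI event."""
    @staticmethod
    def submit(form_data):
        if not form_data.get("name"):
            return "error: name required"
        return "submitted"

def submit_button_click_v2(form_data):
    return RequestService.submit(form_data)   # page handler just delegates

def ajax_submit_endpoint_v2(form_data):
    return RequestService.submit(form_data)   # web service just delegates


print(submit_button_click_v2({"name": "Ana"}))
print(ajax_submit_endpoint_v2({}))
```

In the second style, changing the validation rule means editing `RequestService.submit` alone, so a modification can no longer silently break one handler while fixing another.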

So how can a design pattern be used to solve these commonly found problems? I will try to describe that in my future posts.