Project Management and Married Life: They Can Be the Same

Am I crazy to compare project management with married life? They are like apples and oranges, you cannot compare them! There are many successful marriages that never needed project management!
Unfortunately, they have more in common than you might think. Those who fail in marriage do not manage their marriage well, while those who succeed actually manage their married life well without ever noticing it. If so, what are the similarities between the two?

Asp.Net MVC Dynamic Assembly Resolve

Background

In my scenario, I required three things for my application system (in fact, only the first one is mandatory):
  1. A set of libraries (a framework) which can be referenced by other applications
  2. The library version is updated to reflect deployments and bug fixes. This means the major, minor and build numbers change regularly between releases.
  3. The updated library should apply to all deployed systems
Up until now, I have spent around 10 - 16 working hours trying to accomplish those three requirements. A bit of a waste, I'd say. And to be honest, I still have not found the correct way to fulfill my needs.
The difficulty here is the complexity of referencing assemblies, the black-box process of assembly resolving, and the inconsistency between applications in how assemblies are resolved. Fortunately (and also unfortunately), this complexity only happens in web applications (Asp.Net), and not WinForms (I don't know about WPF though).
So what are the ways of resolving assemblies that I have discovered so far? N.B.: the cons stated below may be caused by gaps in my own knowledge and experience.

AppDomain.AssemblyResolve

One of the very first solutions that I thought would perfectly match my needs was handling the AppDomain.AssemblyResolve event to probe the correct directory. It has some benefits:
  • You don't need to state the assembly version; the public key token and assembly name are enough.
  • You can specify the location of the library.
  • Unlike the GAC, no installation is required for the shared library. You just drop the file into the folder.
  • The resolving logic can be tracked in code, making debugging easier and letting a new developer find the library location just by reading it.
It is really beneficial. No extra effort is needed to deploy a new installation, and the library files can live in a separate location. Moreover, the version can be ignored, meaning that as long as you consistently deploy only one version of the library, you are good.
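To illustrate, here is a minimal sketch of how the event can be hooked up in global.asax. The shared-library folder D:\SharedLibraries and the method name are hypothetical, not my actual setup:

// Global.asax.cs -- a minimal sketch; the folder path below is hypothetical.
using System;
using System.IO;
using System.Reflection;
using System.Web;

public class MvcApplication : HttpApplication
{
    protected void Application_Start()
    {
        // Hook the resolver before any of the shared assemblies are requested.
        AppDomain.CurrentDomain.AssemblyResolve += ResolveSharedAssembly;
    }

    private static Assembly ResolveSharedAssembly(object sender, ResolveEventArgs args)
    {
        // args.Name looks like "My.Shared.Library, Version=1.0.0.0, Culture=neutral, PublicKeyToken=..."
        var name = new AssemblyName(args.Name);
        var candidate = Path.Combine(@"D:\SharedLibraries", name.Name + ".dll");

        // Returning null lets the default probing continue for assemblies we do not own.
        return File.Exists(candidate) ? Assembly.LoadFrom(candidate) : null;
    }
}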
However, it also has several cons:
  • Intermittently the dll can end up copied into the bin folder, rendering the central update useless.
  • The Razor view engine does not call AppDomain.AssemblyResolve when resolving assemblies.
  • Assemblies referenced in the web.config compilation section don't go through AppDomain.AssemblyResolve either.
  • The assembly must be signed with a strong name.
  • It is not consistent; there is no reliable trigger to know when the assembly has been updated.
These cons mean this solution cannot fulfill my needs, because I need every new deployment with an updated version to be applied to all applications. Even if I hack around the version (by not updating the assembly version), I still cannot overcome the second con: the assembly references made by the Razor view engine.

Installing the Assembly in GAC

Well, let's get to the point: the benefits of using the GAC:
  • Many experienced developers know how to use the GAC (it is the standard approach).
  • It can be referenced by the Razor view engine.
  • Reference resolution is faster (not a big deal though).
  • It is consistent. The assembly update is a well-defined event and can be detected.
Then, what are the cons?
  • The assembly version needs to be specified.
  • It needs to be installed with gacutil or a predefined installer (not a big deal though).
  • The GAC location is locked and cannot be changed.
  • It must be signed with a strong name.
Well, despite its cons, this solution can be used to meet my needs. I just need to keep the assembly version fixed and update only the file version. Multiple assembly versions can also be deployed side by side this way to avoid breaking changes, but in my experience multiple assembly versions lead to headaches when debugging.
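As a sketch of that trick (the version numbers below are just examples), the shared library's AssemblyInfo.cs keeps AssemblyVersion fixed and only bumps AssemblyFileVersion:

// AssemblyInfo.cs of the shared library -- a sketch; version numbers are examples only.
using System.Reflection;

// Kept constant so existing references and the GAC identity keep resolving.
[assembly: AssemblyVersion("1.0.0.0")]

// Bumped on every publish so deployments and bug fixes can still be traced.
[assembly: AssemblyFileVersion("1.0.3.17")]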
I have not yet tried publisher policy assemblies, though. However, it would still be bothersome to create a binding redirect policy for every version increment.

Copying dlls at Application_Start in global.asax

Well, I found this solution interesting, if it could be made to work, because of these benefits:
  • The assembly does not need to be signed.
  • The directory can be specified.
  • The logic can be tracked in code.
  • It can be referenced by the Razor view engine.
  • The version does not need to be specified.
Then what are the cons?
  • Huge performance impact.
  • I still cannot get the logic right for deciding whether or not to copy the dll, which forces the app to be recompiled on every request and renders the InProc session useless.
  • It is not consistent; there is no reliable trigger to know when the assembly has been updated.
Ahem, the instability and the huge performance impact make this solution unusable, even though it has several benefits.
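For completeness, here is a rough sketch of the idea. The shared folder path is hypothetical, and keep in mind that overwriting files under bin restarts the application, which is exactly the performance problem described above:

// Global.asax.cs -- a rough sketch of the "copy on Application_Start" idea.
using System;
using System.IO;
using System.Web;

public class MvcApplication : HttpApplication
{
    protected void Application_Start()
    {
        var source = @"D:\SharedLibraries"; // hypothetical shared-library folder
        var bin = HttpRuntime.BinDirectory;

        foreach (var dll in Directory.GetFiles(source, "*.dll"))
        {
            var target = Path.Combine(bin, Path.GetFileName(dll));

            // Only copy when the shared copy is newer; copying into bin restarts the app.
            if (!File.Exists(target) ||
                File.GetLastWriteTimeUtc(dll) > File.GetLastWriteTimeUtc(target))
            {
                File.Copy(dll, target, true);
            }
        }
    }
}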

Copying dlls on Every Publish

I have not tried this approach yet. It would use a custom tool to deploy the assembly to every registered application. This approach has several benefits:

  • The base dll repository location can be specified; there is no need to use a system location.
  • It can be referenced by the Razor view engine as well as by the compilation assemblies in web.config.
  • It is consistent. The assembly update is a well-defined event and can be detected.
  • The assembly version does not need to be specified.
  • The assembly does not need to be signed.
  • Manually copying the dlls via FTP also works for hosted applications.
The cons:
  • It needs extra effort to build the publishing tool. Do it manually and you will find yourself spending too much time.
  • The deployment can break for several reasons, such as access restrictions or locked files.
  • You need to maintain the list of references.
Um, well, this solution needs a bit of effort to set up, but it is possible. Moreover, as noted above, manually copying the dlls also covers hosted applications.
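As a sketch only: the tool could be as simple as a console app that copies the repository into every registered application's bin folder. All paths below are hypothetical, and a real tool would read the list from configuration and handle locked files and access errors:

// PublishSharedLibraries.cs -- a console-tool sketch; all paths are hypothetical.
using System;
using System.IO;

class PublishSharedLibraries
{
    static void Main()
    {
        var repository = @"D:\SharedLibraries";
        var registeredApps = new[]
        {
            @"D:\Sites\AppOne\bin",
            @"D:\Sites\AppTwo\bin",
        };

        foreach (var appBin in registeredApps)
        {
            foreach (var dll in Directory.GetFiles(repository, "*.dll"))
            {
                var target = Path.Combine(appBin, Path.GetFileName(dll));
                Console.WriteLine("Copying " + dll + " -> " + target);
                File.Copy(dll, target, true);
            }
        }
    }
}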

Conclusion

Well, I cannot draw a firm conclusion here; I am still running experiments on this issue. Up until now, copying the dlls on every deployment is the best solution so far. It has several benefits, and it can leverage an existing, established automatic deployment process. In second place is installing the assemblies into the GAC, with a custom tool to automatically generate the assembly policy and install it.

Software project management and team sport

In project management, you manage a team. In sports such as basketball and football, you also manage a team. Both sports and project management involve managing a team of players. But where exactly do project management and sports resemble each other in how the team is managed?

What does Clean Code mean to you?

The very basic question

It is a very basic question for mid-level programmers (professional programmers with advanced skills who are not yet masters), and it has already been discussed, maybe for decades, in the various discussion forums and articles I have found.
Still, we arrive at the basic question: what actually is clean code? What follows is purely my opinion about clean code.

Is Maintainable / Clean Code a Requirement for Your Apps?

Dirty code? That is code produced without considering maintainability. You can consider code dirty when it is tightly coupled, uses raw arrays or maps instead of proper data structures, or relies on hacks like global variables. One characteristic of dirty code is that once the application becomes large or complex, it is hard to extend or modify and error-prone while doing so. Does your application need the opposite (so-called clean code)? Not every app needs clean code, and here is why.

Floyd-Warshall in a Nutshell

This article is intended to explain the Floyd-Warshall algorithm (a shortest-distance-finding algorithm), especially for those who are new to it.

Floyd-Warshall Algorithm

As described by Wikipedia, it is "a graph analysis algorithm for finding shortest paths in a weighted graph with positive or negative edge weights". Simply put, it is an algorithm that finds the shortest path between every pair of points, given a set of possible routes with different "costs". As simple as that.

Example

Say that we have 5 points (1, 2, 3, 4 and 5) connected like this:

1---2
|  / \
| /   5
|/   /
3---4

With the path costs as follows (if the lack of a unit bothers you, just read them as meters or kilometers):
1 --> 2 = 2
1 --> 3 = 3
2 --> 5 = 1
2 --> 3 = 7
3 --> 4 = 3
4 --> 5 = 2

And we need the shortest route from 5 to 3. Working it out by intuition, we would use this sort of procedure:
  1. Pick point 5
  2. Pick all possible routes, in this case:
    (1) 5-2-1-3 (the sum is 1 + 2 + 3 or 6)
    (2) 5-2-3 (the sum is 1+7 or 8)
    (3) 5-4-3 (the sum is 2 + 3 or 5)
  3. Pick the lowest cost, which is route 5-4-3, with cost of 5
That's it for a single origin-to-destination calculation.

Using Floyd-Warshall Algorithm

Using the Floyd-Warshall algorithm, you can find the shortest paths between all pairs of points at once. There are a few steps to it.

Represent the paths as a two-dimensional array

The paths from the example above need to be represented as a two-dimensional array. For example, the row for point 2 would be represented like this:

{ 2, 0, 7, inf, 1 }

The first value (2) represents the route between point 2 and point 1; in the array it is stored as path[2][1]. The second value (0) represents the route from point 2 to point 2 itself, which is zero distance. The fourth one, inf (infinite), represents the route from point 2 to point 4, which is not directly reachable, so its cost is infinite. In a programming language, the infinite can be replaced by a sufficiently large number (be careful with the maximum value of int, since adding two of them together overflows).

In short, the full two-dimensional array looks like this:

         from
       1 2 3 4 5
     ------------
   1 | 0 2 3 i i
   2 | 2 0 7 i 1
to 3 | 3 7 0 3 i
   4 | i i 3 0 2
   5 | i 1 i 2 0

(the i symbol represents infinite)
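If you want to write it down in code, here is how I would sketch the same matrix in C# (my own convention here: index 0 is left unused so the indices match the point numbers, and a large sentinel value stands in for infinite):

// A sketch of the distance matrix above; INF is large but still safe to add without overflow.
static class RouteMap
{
    public const int INF = 1000000;

    public static readonly int[,] Dist =
    {
        //  (0)   1    2    3    4    5
        {    0,   0,   0,   0,   0,   0 }, // row 0 unused
        {    0,   0,   2,   3, INF, INF }, // point 1
        {    0,   2,   0,   7, INF,   1 }, // point 2
        {    0,   3,   7,   0,   3, INF }, // point 3
        {    0, INF, INF,   3,   0,   2 }, // point 4
        {    0, INF,   1, INF,   2,   0 }, // point 5
    };
}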

The pseudocode

The snippet below is pseudocode for the Floyd-Warshall algorithm based on the case above.

for k from 1 to 5
   for i from 1 to 5
      for j from 1 to 5
         if dist[i][j] > dist[i][k] + dist[k][j] then
            dist[i][j] = dist[i][k] + dist[k][j]

This nested loop takes every pair of points (i, j) and checks whether going through an intermediate point k gives a shorter cost than the best one known so far. If it does, the stored cost is replaced.

Say that now we have iteration of:
k = 3
i = 4
j = 1

Then the comparison becomes: if dist[4][1] > dist[4][3] + dist[3][1], or equivalently if inf > 3 + 3. The comparison is true, so dist[4][1] is replaced by 6, the cost of route 4-3-1. A later iteration (k = 5) improves it again to 5, via route 4-5-2-1. After the whole loop has run, this is the expected result:

           from
       1  2  3  4  5
     ---------------
   1 | 0  2  3  5  3
   2 | 2  0  5  3  1
to 3 | 3  5  0  3  5
   4 | 5  3  3  0  2
   5 | 3  1  5  2  0

Comparing the results, the shortest cost from 5 to 3 is 5, which matches our first, manual calculation.
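To tie it all together, here is a small, runnable C# sketch of the same computation. It follows the pseudocode above and should reproduce the result matrix, including the 5-to-3 answer:

// FloydWarshallDemo.cs -- a runnable sketch of the algorithm on the example above.
using System;

class FloydWarshallDemo
{
    const int INF = 1000000; // stands in for "infinite", safe to add without overflow

    static void Main()
    {
        // Index 0 is unused so the indices match the point numbers 1..5.
        int[,] dist =
        {
            { 0,   0,   0,   0,   0,   0 },
            { 0,   0,   2,   3, INF, INF },
            { 0,   2,   0,   7, INF,   1 },
            { 0,   3,   7,   0,   3, INF },
            { 0, INF, INF,   3,   0,   2 },
            { 0, INF,   1, INF,   2,   0 },
        };

        // Try every point k as an intermediate stop for every pair (i, j).
        for (int k = 1; k <= 5; k++)
            for (int i = 1; i <= 5; i++)
                for (int j = 1; j <= 5; j++)
                    if (dist[i, j] > dist[i, k] + dist[k, j])
                        dist[i, j] = dist[i, k] + dist[k, j];

        Console.WriteLine("Shortest cost from 5 to 3: " + dist[5, 3]); // prints 5

        // Print the full result matrix.
        for (int i = 1; i <= 5; i++)
        {
            for (int j = 1; j <= 5; j++)
                Console.Write(dist[i, j] + " ");
            Console.WriteLine();
        }
    }
}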

Designing Systems, the Art and Pitfalls

This article is mainly based on a stackoverflow question about designing systems. As I have written before about learning by teaching, this is a good example of it. Even though I had experience designing systems, I still could not define the exact steps needed to design one. Now I have learned much and am able to lay out the explicit steps of designing a system, at least from my own experience.

The High Level and Low Level Module

In the context of system (application) design, the high level module is an overview picture of how the system interacts with the user and with other integrated systems, while the low level module is a detailed picture of how the subsystems inside interact with each other. That's it: a system design is divided into these two modules.

High Level Module

We need to split the design into separate modules because it is hard to design a system without a high level (overview) module. The high level module is more understandable to business users. Moreover, there are many pitfalls besides system errors, such as wrong use case scenarios and wrong business rule validations, and catching those pitfalls in the high level design is easier and faster. Who does not love a simpler, faster and easier job? That is why we should do high level module design.

Taken from my stackoverflow answer, consider a standard point-of-sale system that has the following sub-modules:

  • ordering
  • committing order
  • down payment
  • goods delivery
  • return

Here are the steps for defining the high level module design:

  1. Define the standard use cases between the user and the system
  2. Pour the use cases into a collaborative diagram such as a rich picture (or anything you are familiar with)
  3. Define the exception use cases. If an exception can be defined easily, put it into the model immediately. If not, mark the model with the exception so it can be discussed further with the business team. Some exception use cases can be: changing a committed order, changing a committed order after down payment, cancelling a paid order, goods out of stock, etc.
  4. Iterate the process. Usually step 3 feeds back into step 1 (an exception can / will become another use case). For example, changing a committed order can become a use case of its own, since the chance of it occurring is high.
  5. When step 3 completes without additional exception use cases (all use cases have been handled), I usually add value-adding operations.
    Those operations can be notifications (email / on-screen), historical data maintenance, reminders, error handling, etc. Some of them can be use cases in their own right, so you may need to iterate back to step 1.
    For example, if an error occurs during down payment settlement, you may need another use case to input the down payment data manually, or you may need to maintain the reminder function in another system.
  6. Move on to the low level module
Well, each point could be a separate discussion on its own.

Low Level Module

Low level module design, on the other hand, gives a more detailed view of the system and shows how the subsystems work with each other. Many times, the low level module is overlooked by management because it is far faster to begin coding immediately than to create it. So what are the benefits of low level module design?

These are the often-overlooked benefits of low level module design:

  1. It can act as documentation
    Class diagrams, database designs, state diagrams, flowcharts, sequence diagrams: all of them can serve as technical documentation or a "blueprint" of the system. Is it needed? Yes, in most cases, usually as the first step of debugging.
  2. It catches pitfalls, errors and exceptions early
    Most of the time, errors and exceptions are only caught during integration testing. When you find an error during testing, you review the general process of the system; by then it is too late, because your code has already been built on top of your database structures.
  3. It keeps your code base clean
    Little hacks and tweaks are sometimes (most of the time) applied to fix something during testing (see point 2). With a low level module, you are forced to define the general structure of your code base up front, so pitfalls can be avoided early, making your code cleaner and reducing the need to refactor.
  4. It can be reviewed easily
    Discussing designs with peers using the low level module design is easier and faster than reviewing code.
  5. It can be used as a basis for review and evaluation
    After the code is complete, you can review the mechanism and structure against the low level module design. This helps find pitfalls or unfinished work earlier (before integration tests).
Well, there are many benefits, but they are often overlooked by management, because schedules are usually made with the waterfall model: development only moves forward (design, code, test, publish) without allowing for exceptions in between (bug fixes during testing, redesign during coding, etc.). And for a simple CRUD application, a low level module seems overkill (even though nice to have) to most management, whose reasoning is: "it is okay to publish buggy code rather than spend 40 hours designing the low level module."

Then how do you design the low level module? Well, the answer lies in many books, such as UML guides for OOP, etc.