Custom events in JavaScript

In one of our web applications there is a need to show notifications whenever the synchronisation of data from our vessels completes. Some dashboard graphs and grids need this information so they can reload their data and show the new details.

To implement this, I quickly made a WCF service method which checks the synchroniser status records to determine the current status. Now the interesting part: I want this status change to raise some web events so that any client page can simply subscribe to those events and do whatever it wants. To create custom events I used jQuery, and following is the code;

var beforeSourceSyncCheck = $.Event('beforeSourceSyncCheck');
$(window).trigger(beforeSourceSyncCheck);

The above code creates a new custom event named “beforeSourceSyncCheck” and triggers it immediately. This code is placed in the master page so that the events are available to any page which uses that master page. To consume the event, follow the code below;

$(window).bind('beforeSourceSyncCheck', function (e) {
    MASTER_ShowNotification('<b>DbSync: </b> Vessel data synchronization completed.', 'info');
});

This way I am able to create as many event handlers as I want to meet my requirements.
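Putting both halves together, a minimal sketch of the pattern might look like the following. Note that the polling interval, the /SyncService.svc/GetStatus endpoint and the sourceSyncCompleted event name are hypothetical stand-ins; only the $.Event/trigger/bind mechanics come from the snippets above.

// Hypothetical endpoint and event names, for illustration only
function checkSourceSyncStatus() {
    // Let subscribers know a status check is about to run
    $(window).trigger($.Event('beforeSourceSyncCheck'));

    $.getJSON('/SyncService.svc/GetStatus', function (status) {
        if (status && status.Completed) {
            // Notify every page that subscribed to the completion event
            $(window).trigger($.Event('sourceSyncCompleted'));
        }
    });
}

// Poll the WCF service every 30 seconds from the master page
setInterval(checkSourceSyncStatus, 30000);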

How I changed ProjectDocs

This is my first post in the “Experience” series. In these posts I’ll discuss my previous projects: why I worked on them, how I made them, what mistakes I made and what I learned while working on them. Hope you will enjoy this boring stuff.

“ProjectDocs” is the first application I worked on at Omni Offshore. It is July 1st, 2010, my first day at Omni Offshore. I completed the formal introduction, received my ID card and the regular stationery (pencils, notebooks and so on), and my manager showed me my workspace. I relaxed and had a brief chat with my co-workers, learning about their hobbies and interests. Then came my manager with a few documents and introduced me to my first project to maintain: “ProjectDocs”.

I quickly went through the BRD (Business Requirements Document), a kind of SRS. Let me explain briefly what the project is about. It is an ASP.NET application but depends on Documentum from EMC. Basically, this system allows users from the shipyard to review drawings from the engineering team and make comments on them if they are not clear. Once all are clear, they use them to build ships. By the way, our company builds big ships for oil and gas companies. We call them FSOs (Floating Storage and Off-take) and FPSOs (Floating Production Storage and Offloading).

So this gave me the opportunity to learn more about Documentum, DQL, DFC and DFS. The system was already in production, and I received a list of issues and enhancement requests from business users. I went through the code and started to understand what Documentum was all about. I sat with our Documentum resource to learn more about its structure and its query language (DQL), and made a few POCs to understand the interfacing between ASP.NET and Documentum using DFC (Documentum Foundation Classes). Within a week I was ready to take on those issues and enhancements.

Documentum is a self-contained Java-based system. It holds metadata as well as the content of the documents. It supports organising documents by doctypes, and each doctype is defined by its attributes: consider each doctype a table in a DBMS and its attributes the columns of that table. Documentum exposes an API by means of DFC or DFS. DFC is a COM component that has to be installed on each client machine; DFS is a set of services exposed from the Documentum server. We can also use DQL, a query language, to retrieve or update data in the Documentum system. Initially our ASP.NET application communicated with Documentum through DFC.
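Since doctypes behave like tables, DQL reads much like SQL. A minimal sketch of a query against the standard dm_document type (the folder path and name pattern here are made up for illustration, not taken from the actual system):

SELECT r_object_id, object_name, title
FROM dm_document
WHERE FOLDER('/Engineering/Drawings', DESCEND)
AND object_name LIKE 'DRG-%'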

When I went through the code, I found that our ASP.NET application was nothing but a wrapper around Documentum functionality, with many pull and push requests between Documentum and the application. I also found the system so tightly coupled to the shipyard name that virtually everything, from the Documentum structure to the ASP.NET application, had to be recreated whenever there was a new shipyard to deal with. In fact, that’s what happened: there were 4 such applications with different code bases to deal with different shipyards. That means any change had to be made manually on each of those code bases and deployed separately to the production server.

I concluded that, with the current setup, it was not possible to make the changes. Also, since there were so many pull and push requests between Documentum and our application, the overall responsiveness was poor and only getting worse. So I talked to our manager, said we needed to think seriously about this, and asked for a meeting with all the stakeholders.

There were 10 participants in the meeting room, and one of them was our GM. It was hardly a week since I had taken charge, and my GM was interested in my opinion. That made me aware of the importance of the system, as well as the risk in my recommendations. But I proceeded to the meeting room anyway, my hands shaking a bit as I thought about what to say. Everybody looked at my manager, who started the meeting and listed all the issues with the current setup. Then everyone looked at me for my opinion, and I said, “Scrap the current setup and build a better system from the ground up”. My GM seemed unconvinced but agreed anyway after a couple of discussions.

Now the interesting part: designing the new system. I still took Documentum into consideration, as it is our central repository of documents with many workflows running on those documents. But that system is internal, which is why we need the ASP.NET application to interface between public users (shipyard users/engineering team) and Documentum. The change I made was to keep Documentum only for storing documents and move the system data to Oracle. With this, the whole business process, such as commenting and approvals, happens in the ASP.NET system, and only the final documents go to Documentum. As a result, the system does not talk to Documentum unless I need a document, whereas previously the whole business process ran on Documentum.

This change brought flexibility to maintaining the system: you can create as many shipyards as you want and assign users to represent them, so we don’t need to touch Documentum every time we have a new shipyard to introduce. As a byproduct of this whole exercise, I gained a robust understanding of Documentum, DQL, DFS and DFC. This way I was able to bring flexibility and speed into the system, which helped the end users. It also helped our IT department in terms of maintenance and reduced the dependency on the Documentum resource. And finally, it reduced the cost of ownership for the company, as the system became so popular that all our shipyards started using it.

Windows: “The network folder specified is currently mapped using a different user name and password”

When I tried to map a network folder as a network drive on Windows Server 2008, I was greeted with the following:

“The network folder specified is currently mapped using a different user name and password”

The issue was that I had already opened the network path in Windows Explorer with different credentials. This causes a clash when I try to map the same network path as a network drive, because I entered different credentials to connect.

To fix this issue you need to delete the existing connection manually; even closing the Windows Explorer window that opened the network path does not help. To close the connection, open a command prompt and type the following command;

net use \\server\share /delete

This closes the existing connection so that you can create the network drive successfully.
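If you are not sure which connection is clashing, or what the exact share path is, running net use with no arguments first lists all open connections; something like the following sequence (the server and share names are placeholders):

REM List the current connections to find the conflicting share path
net use

REM Remove the conflicting connection (or use * to clear all of them)
net use \\server\share /delete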

ORACLE: How to escape special characters

We often need to escape special characters while retrieving rows. There are two ways to do that;

  1. Use SET ESCAPE '{ESC CHAR}' (a SQL*Plus setting, which escapes the substitution character &)
  2. End the select statement with ESCAPE '{ESC CHAR}' (which escapes LIKE wildcards such as _ and %)

For example;

SELECT SL.SYSTEM_CODE, SL.SYSTEM_DESCRIPTION, SL.SYSTEM_PARTICULARS
FROM SYSTEMS_LIBRARY SL
WHERE SL.SYSTEM_CODE LIKE 'UOTE\_%' ESCAPE '\'

The above statement returns all the systems with a system code starting with ‘UOTE_’. But I cannot use the underscore directly inside the quotes, because LIKE treats it as a single-character wildcard. Therefore I escape it with \ and tell Oracle to treat \ as the escape character by typing “ESCAPE ‘\’” at the end of the select statement.
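For completeness, here is how the first option looks in SQL*Plus. SET ESCAPE lets you type a literal & (the substitution character) without SQL*Plus prompting for a variable; the sample string is made up:

SET ESCAPE '\'
SELECT 'Marks \& Spencer' AS SHOP_NAME FROM DUAL;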

APPCMD: IIS Over Console

When we do a remote deployment, what is more convenient than a console? I once got an opportunity to remotely deploy a web application along with its supporting subsystems, so I needed a batch file to automate the whole deployment procedure.

One of the requirements was to set up IIS with a new app pool, create a VD (virtual directory) and deploy the files needed to run the website. Following are the commands to manage IIS over the command prompt, with a batch sketch putting them together after the list;

APPCMD
  1. To list all existing app pools in IIS use
    APPCMD LIST APPPOOL
  2. To create a new app pool use
    APPCMD ADD APPPOOL /NAME:<NAME_OF_APPPOOL>
  3. To create an app pool with the integrated pipeline use
    APPCMD ADD APPPOOL /NAME:<NAME_OF_APPPOOL> /MANAGEDPIPELINEMODE:"Integrated"
  4. To create an app pool with a specific .NET version (in this case .NET Framework 4.0) use
    APPCMD ADD APPPOOL /NAME:<NAME_OF_APPPOOL> /MANAGEDRUNTIMEVERSION:"v4.0"
  5. To enable 32-bit DLLs use the flag
    /ENABLE32BITAPPONWIN64:"true"
  6. To finally create the application,
    APPCMD ADD APP /SITE.NAME:"Default Web Site" /PATH:/<VD_NAME> /PHYSICALPATH:<PHYSICAL_PATH_TO_VD> /APPLICATIONPOOL:<APP_POOL_NAME>
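Putting these together, a minimal deployment batch sketch might look like the following. The pool name, VD name and paths are hypothetical placeholders, not from the actual deployment:

@ECHO OFF
REM Hypothetical names and paths, for illustration only
SET APPPOOL=MyAppPool
SET VDNAME=MyApp
SET SRC=\\buildserver\drops\MyApp
SET DEST=C:\inetpub\MyApp
SET APPCMD=%WINDIR%\System32\inetsrv\appcmd.exe

REM Create the app pool on .NET 4.0 with the integrated pipeline
%APPCMD% ADD APPPOOL /NAME:%APPPOOL% /MANAGEDRUNTIMEVERSION:"v4.0" /MANAGEDPIPELINEMODE:"Integrated"

REM Copy the site files to the target folder
XCOPY "%SRC%" "%DEST%" /E /Y /I

REM Create the application under the default site and bind it to the pool
%APPCMD% ADD APP /SITE.NAME:"Default Web Site" /PATH:/%VDNAME% /PHYSICALPATH:"%DEST%" /APPLICATIONPOOL:%APPPOOL%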

ORACLE: View compiled with errors

Many times, whenever we try to create views in Oracle, we end up with messages like “View created with warnings” or “View compiled with errors”, but SQL Developer does not reveal the actual error. In such cases, to know the error actually generated, fire the following query in SQL Developer or at the SQL*Plus prompt;

SELECT TEXT FROM DBA_ERRORS WHERE NAME='VIEW_NAME_HERE'

If you have the same view name in more than one schema, add the target schema name as

AND OWNER='SCHEMA_NAME'
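Put together, ordered by error sequence (the view and schema names are placeholders):

SELECT LINE, POSITION, TEXT
FROM DBA_ERRORS
WHERE NAME = 'VIEW_NAME_HERE'
AND OWNER = 'SCHEMA_NAME'
ORDER BY SEQUENCE;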

This view always contains the most recent error raised on that object.

Enterprise IT Landscape

For the last 6 years I have been working as a solution architect for an offshore company. As an architect I am responsible for designing the application landscape of the company to proactively fulfil the computing needs of its employees.

When I first joined the company, I was assigned to look into an existing system to add new features. It was based on ASP.NET but depended heavily on Documentum for all its data needs. I was horrified by the idea of using Documentum as a database and expecting a fast-responding system. I explained the same to my manager and called for a meeting. I was a bit nervous about what I was going to say. Our director asked for my recommendation and I said, “Scrap the existing one and build a new one with better architecture”. He seemed a bit disappointed, but after I presented my case he finally gave his go-ahead.

Later I took steps to abolish a few independent systems and integrate their data sources to build portals for each department. I even provided API services wherever one portal needed data from another.

All this experience led me to the idea of mapping the application landscape of an enterprise. With my little experience and understanding, I want to classify the application landscape of an enterprise as follows;

Computer Generation:

In this generation of applications, the main idea is just to capture data and present it back to the user. The system does not do any transformation of the data but depends heavily on the end user’s skills to transform the data into meaningful insights.

Users keep exporting data into Excel sheets to work on it offline. An enterprise of this generation will have lots of independent applications with different characteristics, like the programming language used, API versions, database structure or even the databases used (trust me, many applications use SQL Express or Access DB), and so on. It’s chaos, and anyone who wants to do something about it faces an overwhelming number of changes to make. Usually we end up saying it’s fine as long as it does not break.

This hinders progress and the imagination of what can be achieved with all the data being captured. Then we actively look for off-the-shelf solutions to replace all this mess.

Computing Generation:

This is slightly better than the previous one. In this case, users are more interested in searching for and retrieving data. Transformation of data becomes more common across the applications, and more emphasis is put on how easily the user can retrieve data.

But it stops there: we still have a plethora of applications to maintain, each living in its own silo. These are friendlier to end users than to developers, and users are still looking for off-the-shelf products to replace them.

Analytical Generation:

This is the start of the idea of integrating the applications. People are thinking about portals, common data sources, a common look & feel, and navigation across the portals. Independent applications are either abolished in favour of more integration or merged into portals.

It’s a new life for both users and developers. For users it’s the same look and feel and navigation across the site. Data is more easily accessible, and the system provides insights into the problem by bringing various data points to the user’s attention. For the first time, the user spends more time working towards solving a problem than wrangling data.

Applications in this generation provide information in terms of graphs and charts rather than endless rows of data. Users talk in terms of KPIs instead of going through Excel sheets.

Intelligence Generation:

Systems in this age are well automated. They proactively work on models and provide feeds to users. Users spend their time building the future based on the feeds they get from the system. Applications of this age both influence and support decisions.

These systems constantly monitor and analyse the data in the background. They have access to various data sources to pull data from, and they run AI algorithms to prepare important feeds for the users. They show recommendations to the end users and proactively empower them with the right data to solve a problem.

As an architect, my endeavour is to bring applications into at least the analytical generation, if not the intelligence generation.

PowerShell: Remote session

Remotely connecting to a computer and performing administrative tasks is a very common activity for any system administrator, and PowerShell offers an excellent way to do exactly that.

To connect remotely to any computer, use the command below in PowerShell;

Enter-PSSession -ComputerName XXXX -Credential YYYY

Here XXXX is the computer name or IP of the remote computer and YYYY is the user name with which you want to log in. This delivers a remote PS session to the remote computer, provided the given computer is in the TrustedHosts list of WinRM. If not, PowerShell throws an exception.


In that case, we need to add the remote computer to the TrustedHosts list of WinRM. To do that, issue the command below;

winrm s winrm/config/client '@{TrustedHosts="RemoteComputer"}'

Here “RemoteComputer” is the name or IP of the computer to which you want to connect.
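As an aside, the same setting can also be changed from PowerShell itself through the WSMan: drive; a minimal sketch, assuming an elevated prompt:

# Overwrites the TrustedHosts list; use -Concatenate to append instead
Set-Item WSMan:\localhost\Client\TrustedHosts -Value 'RemoteComputer' -Force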


Now execute the Enter-PSSession command again to invoke a remote PS session on the remote computer.


Now you have a remote command shell (PowerShell).
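When you only need to run a single command rather than work in an interactive shell, Invoke-Command is a handy companion to Enter-PSSession; a minimal sketch, with XXXX and YYYY being the same placeholders as above:

# Run one command remotely without keeping an interactive session open
Invoke-Command -ComputerName XXXX -Credential YYYY -ScriptBlock {
    Get-Service | Where-Object { $_.Status -eq 'Running' }
}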

My Computing Platform

1999 was the year I bought my first computer. With an Intel 533 MHz processor and 512 MB RAM, it was my best friend. But that was the last time I purchased a branded computer. Since then I have built my own PCs from components that I choose. It’s an awesome experience when you build your own tool of computing. I am sure many of you build your very own PCs too.

I personally believe that if you build your own PC, you will feel a bond with it. My current build has already completed its 4th anniversary. The following is the list;

  1. ASUS Crosshair V motherboard
  2. AMD FX-8150 8-core processor at 3.6 GHz
  3. 16 GB 1600 MHz RAM
  4. 512 GB SSD, 1 TB WD HDD at 7200 RPM and a 128 GB WD HDD at 10,000 RPM
  5. AMD Radeon HD 6900 GPU
  6. CM Storm Trooper casing
  7. Dell U2515H monitor with an IPS panel

Apart from my desktop, I also have a laptop, an Alienware M15x, which I bought in 2010 before my desktop. I love gaming, so that was the plan. But unfortunately my gaming enthusiasm did not last long with the M15x: I was playing Crysis, and even on low settings the M15x did not impress me. So finally I built my own desktop.

Recently I changed the laptop’s hard drive to a 512 GB SSD. Since then it’s really fast, and I am using it as my development machine whenever my desktop is occupied by my son.