The Danger of Hiring Indian FTrash

makapaaa

Published July 15, 2010

Bad advice from support unit caused DBS systems crash
But why back-up systems also failed is still not known

By CONRAD TAN

(SINGAPORE) Bad advice by a member of IBM's Asia-Pacific team outside Singapore for a routine repair job on DBS Group's data storage system is likely to have caused the massive systems failure that prevented the bank's customers from accessing their accounts on July 5.

An IBM repair crew in Singapore had sought advice from the regional team on how to fix a minor fault in DBS's storage system connected to its mainframe computer.
A member of the regional team recommended an outdated procedure, which then triggered the systems crash that disrupted DBS's banking and ATM services for more than seven hours.
BT understands that the procedure had previously been used without incident, but had been updated before July 5.
Such repair procedures are regularly updated, and it isn't clear whether IBM knew that the old procedure could have caused the type of system-wide crash that shut down DBS's cash machines, online banking and credit card services on Monday last week.
And while botching the repair may have caused a system-wide malfunction, it's still not known why the bank's back-up systems also failed.
Both IBM and DBS have declined further comment since their public statements on Tuesday. A full-scale investigation into the systems crash is still under way.
The IBM Asia-Pacific team is the central support unit for all IBM storage systems in the region.
BT understands that it is common for IBM support staff in different countries to rely on the regional team for advice when maintaining the complex data-processing and data-storage systems that IBM supplies to clients such as DBS.
On Tuesday, DBS chief executive Piyush Gupta apologised to DBS and POSB customers in a letter posted on the bank's website, and said that a 'procedural error' by IBM during a routine repair job on a component within the bank's data storage system connected to its mainframe computer had triggered the crash.
In a separate statement, IBM said that 'a failure to apply the correct procedure' to fix a simple problem in the data storage system it maintains for DBS had crashed most of the bank's systems.
The outage has dealt DBS's reputation a blow, just when it seemed that the group was making good progress in improving its consumer banking business here, which has lost market share to rivals over the past decade.
Mr Gupta, who joined the bank last November, and has worked to cut waiting times at the bank's branches, ATMs and call centres here to improve its customer service in what is still by far DBS's biggest market, said that DBS accepts full responsibility for the service disruption on July 5.
The Monetary Authority of Singapore said on Tuesday that it was 'seriously concerned' by the failure of DBS's banking services that day and had asked the bank to give a full account of the incident to the public.
 
An IBM repair crew in Singapore had sought advice from the regional team on how to fix a minor fault in DBS's storage system connected to its mainframe computer.
=> A minor fault also needs outside help? It sure looks like DBAss is getting monkeys or baboons, and it remains to be seen if it is paying peanuts!
 
And while botching the repair may have caused a system-wide malfunction, it's still not known why the bank's back-up systems also failed.

=> It sure looks like the FAT CATs in DBAss just outsourced the entire thing away without even bothering to do any proper audit. For all we know, the so-called backup system may not even exist while those FAT CATs continue to pay for it!
 

Backup does exist, but whether they actually tested the backup, or just lazily backup, backup & backup... then go for masala tea with naan or prata... everybody just signs off QED... (see the restore-test sketch below)

:rolleyes:
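To be fair on the testing point: the only backup that counts is one you have actually restored. Below is a minimal sketch (in Python, with made-up paths; nothing to do with DBS's or IBM's actual mainframe setup) of the kind of restore-and-verify check that should pass before anyone signs off.

[code]
# Toy restore-verification check: a backup only counts if you can restore it
# and the restored data matches the original. All paths here are hypothetical.
import hashlib
import shutil
import tempfile
from pathlib import Path


def sha256(path: Path) -> str:
    """Hash a file in chunks so large files don't blow up memory."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()


def verify_backup(source_dir: Path, backup_dir: Path) -> bool:
    """'Restore' the backup to a scratch directory and compare every file
    against the live copy. Returns True only if nothing is missing or corrupt."""
    ok = True
    with tempfile.TemporaryDirectory() as scratch:
        restored = Path(scratch) / "restored"
        shutil.copytree(backup_dir, restored)   # stand-in for a real restore job
        for src in source_dir.rglob("*"):
            if not src.is_file():
                continue
            rel = src.relative_to(source_dir)
            copy = restored / rel
            if not copy.exists():
                print(f"MISSING from backup: {rel}")
                ok = False
            elif sha256(copy) != sha256(src):
                print(f"CORRUPT in backup: {rel}")
                ok = False
    return ok


if __name__ == "__main__":
    # Hypothetical paths -- point these at a real source tree and its backup.
    if verify_backup(Path("/data/live"), Path("/data/backup")):
        print("Restore test passed -- backup is actually usable.")
    else:
        print("Restore test FAILED -- do not sign off.")
[/code]

A real bank would do this against tape or a replicated site and time the restore as well, but the principle is the same: back up, restore, compare.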
 
This place is getting more and more fucked-up as more things are being outsourced or placed in the hands of foreigners who do not give a shit whether errors or mistakes are made. This is not their home...
 
In the 90s when stuff was outsourced, no one complained.

It hadn't begun to bite them yet.
 

The outsourced jobs were done by Local Talents... then businesses took the cue from Lim Sway Sway... they hired FT to replace LT... for they were betterest, fasterest & cheaperest... they could see into the crystal ball... :D
 
I thought that with today's technology (around for at least the past five years), like RAID and hot-swappable drives, you can just take out a storage device and put a new one in without switching off the system or affecting the integrity of the data. That means any lay person can do the job.

Can any IT expert confirm?
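Roughly yes, for redundant setups: in a mirrored (RAID 1) array every write lands on at least two drives, so a failed drive can be pulled and replaced with the system live, and the array rebuilds the newcomer from the surviving copy. The toy Python sketch below only illustrates that idea; it is not how DBS's IBM enterprise storage actually works, and real hot-swaps still depend on following the vendor's current procedure, which is exactly the step that reportedly went wrong on July 5.

[code]
# Toy model of a two-disk mirror (RAID 1). Concept sketch only -- real arrays
# involve controllers, firmware and vendor repair procedures.
class Mirror:
    def __init__(self, blocks: int):
        self.disks = [[None] * blocks, [None] * blocks]  # two identical copies

    def write(self, block: int, data: str) -> None:
        for disk in self.disks:
            if disk is not None:
                disk[block] = data               # mirrored write goes to both disks

    def read(self, block: int) -> str:
        for disk in self.disks:
            if disk is not None:
                return disk[block]               # any surviving copy will do
        raise RuntimeError("both disks gone -- array has failed")

    def pull_disk(self, idx: int) -> None:
        self.disks[idx] = None                   # hot-remove a (failed) drive

    def insert_disk(self, idx: int) -> None:
        survivor = next(d for d in self.disks if d is not None)
        self.disks[idx] = list(survivor)         # rebuild the new drive from the mirror


m = Mirror(blocks=4)
m.write(0, "account balances")
m.pull_disk(1)                      # swap a drive out with the system still running
print(m.read(0))                    # "account balances" -- served from the survivor
m.insert_disk(1)                    # replacement rebuilt; redundancy restored
print(m.disks[0] == m.disks[1])     # True
[/code]

So the hardware concept is simple, but "any lay person can do the job" is exactly the assumption that bites when the documented procedure turns out to be out of date.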
 