IT experts skeptical Obamacare website has just “traffic” problem

The other day, I wrote that long wait times are not the only issue with the Obamacare website, highlighting my skepticism of the administration’s claims that high volume was to blame for all the glitches in the system.

It turns out that Reuters news agency spoke with five technology experts who expressed similar skepticism and questioned the architecture of the Obamacare website.

The U.S. Department of Health and Human Services, which oversaw development of the [Obamacare] site, declined to make any of its IT experts available for interviews. CGI Group Inc, the Canadian contractor that built HealthCare.gov, is “declining to comment at this time,” said spokeswoman Linda Odorisio.

Five outside technology experts interviewed by Reuters, however, say they believe flaws in system architecture, not traffic alone, contributed to the problems.

For instance, when a user tries to create an account on HealthCare.gov, which serves insurance exchanges in 36 states, it prompts the computer to load an unusually large amount of files and software, overwhelming the browser, experts said.

If they are right, then just bringing more servers online, as officials say they are doing, will not fix the site.

“Adding capacity sounds great until you realize that if you didn’t design it right that won’t help,” said Bill Curtis, chief scientist at CAST, a software quality analysis firm, and director of the Consortium for IT Software Quality. “The architecture of the software may limit how much you can add on to it. I suspect they’ll have to reconfigure a lot of it.”

Many users also struggled with a “glitch” in which they were presented with empty drop-down lists for security hint selections.  I experienced the issue myself on the first ten or so attempts at creating a login account.  One technical expert who spoke to Reuters also doubted that issue was caused by traffic alone.

Many users experienced problems involving security questions they had to answer in order to create an account on HealthCare.gov. No questions appeared in the boxes, or an error message said they were using the same answers for different questions when they were not.

Government officials blamed the glitch on massive traffic, but outside experts said it likely reflected programming choices as well.

“It’s a bug in the system, a coding problem,” said Jyoti Bansal, chief executive of AppDynamics, a San Francisco-based company that builds products that monitor websites and identify problems.

Other programmers shared that assessment, as explained in a Bloomberg article I linked in my post the other day.

Another point I made in that post concerned load and stress testing – the claim that demand on the system was unexpected didn’t strike me as an acceptable excuse, since developers follow established standards for determining how to test a system under load. One of the technical experts who spoke to Reuters also spoke with the Washington Post. Asked whether overwhelming traffic seemed a plausible explanation from the Obama administration for all of the problems, he elaborated on the standard approach developers follow in capacity planning and testing.

That seems like not a very good excuse to me. In sites like these there’s a very standard approach to capacity planning. You start with some basic math. Like, in this case, you look at all the federal states and how many uninsured people they have. Out of those you think, maybe 10 percent would log in in the first day. But you model for the worst case, and that’s how you come up with your peak of how many people could try to do the same thing at the same time.

Before you launch you run a lot of load testing with twice the load of the peak, so you can go through and remove glitches. I’m a very very big supporter of the health-care act, but I don’t buy the argument that the load was too unexpected.
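The arithmetic the expert describes is simple enough to sketch. Every figure below is an illustrative assumption made up for this sketch – none of them are actual HealthCare.gov planning numbers:

```python
# Back-of-the-envelope capacity planning, per the expert's description.
# All inputs are invented assumptions for illustration only.

uninsured = 48_000_000       # rough pool of potential users across the exchanges
first_day_share = 0.10       # assume ~10% try to log in on day one
worst_case_factor = 3        # model for the worst case, not the average
peak_hour_share = 0.20       # assume the busiest hour sees 20% of day-one traffic

day_one_users = uninsured * first_day_share * worst_case_factor
peak_hour_users = day_one_users * peak_hour_share
load_test_target = peak_hour_users * 2   # test at twice the projected peak

print(f"Day-one worst case: {day_one_users:,.0f} users")
print(f"Peak hour:          {peak_hour_users:,.0f} users")
print(f"Load-test target:   {load_test_target:,.0f} users")
```

Testing at twice the projected worst-case peak, as the expert recommends, is what shakes the glitches out before launch rather than after.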

While healthcare.gov was scheduled to be down this weekend for maintenance to try and correct some of the rollout glitches, it remains to be seen whether programmers are addressing more than just capacity issues.

Meanwhile, Obama continued to claim this weekend that the demand was beyond what they expected, and once again focused only on the wait times.  Further, he couldn’t provide any numbers on how many people have actually been able to successfully sign up for health insurance.

From Politico:

President Obama doesn’t know how many people have signed up for health care insurance since the marketplaces created by his health care law opened on Oct. 1, he said in an interview released Saturday.

Obama’s comments came in a sit-down with the Associated Press, as he encouraged people trying to register to keep trying. Those who want insurance “definitely shouldn’t give up,” he said. There’s been skepticism from Republicans and the press about whether the White House really doesn’t know how many people have enrolled this week.

Obama said that early interest in the exchanges far exceeded the government’s expectations for the first few days, but “folks are working around the clock and have been systematically reducing the wait times.” HealthCare.gov is down for the weekend to give the administration uninterrupted time to improve the site.

Most people are willing to wait through glitches in a system when it’s a product or service they really want.  But in the instance of health insurance, you’re talking about a product that people are now mandated by the government to purchase.  Add to that, some who have been able to make it through the process are finding that the Affordable Care Act rates aren’t so affordable after all.

Building the public’s confidence is key in this case, and while there’s still time for fixes, confidence won’t come even from supporters unless you’re up front about the problems.

Set aside the “Republican critics” most media would like to dismiss; people of all political stripes, including some who actually supported the Affordable Care Act, are expressing disappointment and skepticism after the Obamacare website rollout.

Unfortunately, the shutdown has all but consumed the news cycle since the rollout of Healthcare.gov.  While some media outlets have focused on the long wait times, very few are actually breaking down the glitches and testing the administration’s claims that volume is solely to blame.

By the way, the account I created to try and just look at the system is still stuck in Zombieland.


Comments

CGI Group Inc, the Canadian contractor that built HealthCare.gov, is “declining to comment at this time,”

They gave the contract to a FOREIGN company? One from a country with a mere 35 million people? They couldn’t find an American company? Maybe if they’d used one of the Silicon Valley companies with the Indian programmers they would have had a site that could handle 100 million users.
BTW the US Census web site is shut down. I doubt that it will be missed.

    GrumpyOne in reply to genes. | October 6, 2013 at 8:52 pm

    This was a task that should have been contracted out to Oracle or some other large-scale producer. Even then some glitches might have sprung up, but it would not have been the show-stopping disaster that the current s/w is.

    And to take this a step further, not much of any government s/w is easy to use, value effective or noteworthy in other ways.

    But this administration has established a new low for product performance…

    DINORightMarie in reply to genes. | October 6, 2013 at 9:09 pm

    That is the first thing I noticed. There are HUNDREDS of American companies that could have done the work, and done it well within budget and ON TIME, with MINIMAL “glitches”.

    They do it EVERY DAY, and do it without all the errors.

    Just like McAuliffe in Va. (the Democrat Clinton crony who is running for governor), JOBS are sent out of the country?!?! So much for American “jobs” being created!!

    Beyond words. I am seeing RED……pun intended.

    Juba Doobai! in reply to genes. | October 6, 2013 at 9:21 pm

    The Canadians did the job that Americans wouldn’t do.

    Aridog in reply to genes. | October 6, 2013 at 9:48 pm

    The US Government contracts most of its “IT” work out to contractors, usually to the lowest bidder. It is required to do so, even in the military, due to the “commercial activities” stipulations in OMB Circular A-76 that end up making all IT work but inherent oversight “non-governmental work.”

    Vendors are selected by bidding, including the government agency’s own bid for the same work, which, if cheaper, might let the agency keep the work in house where the experience resides … however, that seldom happens, if ever.

      Bruce Hayden in reply to Aridog. | October 7, 2013 at 11:08 am

      The problem here is that this is fine in theory, but doesn’t always work in reality. The requirements can be, and often seem to be, skewed towards one vendor or another. 30 years ago, I worked for a competitor of IBM, and we spent a lot of our time fighting unneeded requirements that essentially sole-sourced the bid to them, and they were, inevitably, the high-cost provider, except when they were using a loss leader to take business away from a competitor. (And, yes, they worked equally hard to show that requirements that would sole-source us were unreasonable or unneeded.)

      I then left that company and worked my way through law school as a consultant to an 8A minority contractor, who hired based on whom he was told to hire by his government liaison. (I may have gotten the job only because the liaison had wanted to hire my wife for years, and we were a package deal.) Made a really good living then, managing to come out of LS debt free (and she got another Masters degree at the same time).

      Point is that the system looks good on paper, and sometimes works in reality. But, there is always some room to fudge in favor of desired bidders, and I suspect that this Administration has stretched that further than their predecessors were willing to do.

    Sozo in reply to genes. | October 6, 2013 at 10:31 pm

    CGI is a BIG IT services provider here in the US. They have offices all over VA (my own state) and numerous government contracts. Sure there are lots of other US companies that provide the same type of services, but few that can provide support for a job of this scale. Also, having had experience with certain US-based companies in the same business (here’s looking at you, Northrop Grumman), I can say CGI has a better reputation than some of its American-owned counterparts.

    I’m not defending the govt here, just putting that particular issue in an IT perspective. I almost wish NG had done the Obamacare IT work, just so I could watch it go up in flames.

      JayDick in reply to Sozo. | October 7, 2013 at 8:25 am

      I don’t know if you have been involved in government IT contracting, but it sounds like you might have been.

      I have and I can see why so many of them, including this one, end up as debacles. First, most of the federal employees involved in the contracting have little or no expertise in the services being acquired. So they set up “requirements” that don’t fit the work very well. These requirements drive the abilities that bidders are supposed to have. When it comes to evaluating the bidders, again you have government employees who really don’t have the skills or knowledge needed to do the evaluation. And, of course, the bids (proposals in most cases) have lots of spin in them, making them even harder to evaluate.

      Then, if the criteria for evaluating bidders were not set up correctly, which they usually are not, the winning bidder is often not really qualified for the work being contracted.

      The management of the contract after it is awarded is also usually very weak because the federal employees doing the managing don’t adequately understand the technology they are managing. And the contractors know they can get away with less than stellar performance. So, they hire the cheapest people they can find who they can convince the feds are qualified.

      In other words, the whole thing is pretty hopeless.

      Paul in reply to Sozo. | October 7, 2013 at 11:53 pm

      Can you name one large-scale (hundreds of millions of users) web app that CGI has built?

    janitor in reply to genes. | October 6, 2013 at 11:13 pm

    Foreign. It isn’t akin to outsourced work by a domestic financial institution, which has limited information and is in privity with its customers and directly liable for its failures, unlike the federal government.

    Foreign employees of foreign companies contracted by the federal government would have access to the informational databases as a practical side effect of maintaining the software. These individuals, godknowswho they are or might be from time to time, aren’t subject to our laws or the jurisdiction of our courts. Isn’t that sweet.

If there were lots of happy, newly insured people, and the Obamacare exchanges were working well, we would be seeing snarky “I told you so’s,” lots of PR and photo-ops, and cheers and thrilled stories in the media.

legacyrepublican | October 6, 2013 at 7:40 pm

And these people are supposed to keep all my medical records secret!?

I am sorry. I work in IT. I know the security necessary and at best, these people don’t have a clue what they are doing and at worst, which I suspect, don’t care!

Politics trumps my health care. As long as they get re-elected, that is all that matters.

    No, they’re not supposed to keep your medical records secret. Apparently (as I heard from a reliable source) they tell you to be aware that lots of people will be able to see the info you’re required to input — in order to beg the government (i.e. other taxpayers) for a subsidy for something that used to be reasonably affordable for most of us until the Democrats made it unaffordable.

    And that angers me hugely, as someone who makes very minimal demands on the medical system. Do I get any breaks for not straining the medical system? No way!

      Bruce Hayden in reply to Radegunda. | October 7, 2013 at 11:14 am

      I do find this an interesting concept. Your medical records are supposedly secret unless you willingly give up the right to privacy (and this even works to keep people from helping their spouses, without explicit, typically written, permission). And, (ignoring the egregious actions by the IRS under Obama in targeting their opponents) it is supposed to work the same with the IRS. But, the cost of health insurance is massively raised by ObamaCare, and then people are offered subsidies bringing the costs back to where they would have been otherwise by, among other things, waiving these confidentiality rights to health and IRS information. Pretty good scam.

Just like the debt/deficit problem is because taxes aren’t higher.

As someone with over 18 years of experience in functional and performance QA testing, I have to agree this system wasn’t properly stress tested. Well-designed stress and load tests will reveal memory problems and excessive CPU usage. These problems can be corrected both by revising the code and by expanding the server farm.

Certainly it wasn’t sufficiently functionally tested, either. With the kinds of errors described, the test plans (if they even had any) themselves were not properly reviewed, implemented or followed.

Then again, this was most likely a three year project squashed into a two year time frame and lots of stuff got missed. If a shop doesn’t have a committed performance testing mentality then when the project is nearing the end it is too little too late.

    Bruce Hayden in reply to RightSide. | October 7, 2013 at 11:45 am

    But as pointed out by the recent WaPo article, scaling of the hardware is the easy thing. Just cart more hardware in, plug it in, and fire it up. Problems here seem more fundamental. And, sometimes, what is required is a basic design rewrite. I have run into them for almost 40 years (though much less so since I moved from software engineering to patent law some 20 years ago). Maybe not as much as you, but I did spend a number of years working on scalability and load issues – in my case primarily involving data communications software – at a time when the standard was several lines, and we would support dozens of simultaneous bulk transfers. Mostly, my job was to determine why the software worked on small numbers of lines, but failed when scaled up. Not always easy, because the failures usually were fairly subtle, involving inadvertent timing windows (and, rarely, the type of mis-design mentioned below).

    One example I gave there was a system built to interact with other systems (back before this was common). It worked with one other system, but started to fail with 2, and fell apart with more. Why? They used waits in the code, waiting for inputs from the other systems – implicitly single-threaded. Ultimately it was discarded, and ported to my design that used queued messages (which meant that it had a single wait point, etc.).
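The queued-message design described in the comment above – many producers, one wait point – can be sketched in a few lines of Python. The system names and payloads here are invented purely for illustration:

```python
import queue
import threading

# Sketch of a single-wait-point design: instead of the main program
# blocking on each remote system in turn (implicitly single-threaded),
# every system posts messages to one shared queue and the main loop
# waits in exactly one place.

events = queue.Queue()

def remote_system(name, payloads):
    # Each simulated remote system pushes its responses onto the shared queue.
    for p in payloads:
        events.put((name, p))
    events.put((name, None))  # sentinel: this system is finished

systems = {"A": [1, 2], "B": [3], "C": [4, 5, 6]}
threads = [threading.Thread(target=remote_system, args=(n, ps))
           for n, ps in systems.items()]
for t in threads:
    t.start()

finished, received = 0, []
while finished < len(systems):   # the single wait point
    name, payload = events.get()
    if payload is None:
        finished += 1
    else:
        received.append((name, payload))

for t in threads:
    t.join()
```

Adding a fourth or fortieth remote system changes nothing in the main loop, which is the scalability property the blocking-wait design lacked.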

    Now, one recent example that was fixable involved saving a number of text locations for later use. It worked great on, say, 100 numbers with a simple search of a list of such, but failed miserably when I inadvertently threw 20,000 numbers at it. The original solution was essentially N^2 (or N*N) – quadratic growth. My solution was keeping the list sorted and accessing it with a binary search, which made it closer to Log2 N than N^2. Still took more redesign than I would have liked.
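The fix described above – keep the list sorted and binary-search it – looks like this in Python, using the standard library's `bisect`; the data is randomly generated for illustration:

```python
import bisect
import random

# A linear scan over an unsorted list costs O(N) per lookup, i.e.
# O(N^2) for N lookups. Keeping the list sorted and binary-searching
# it drops each lookup to O(log N).

random.seed(0)
locations = sorted(random.sample(range(1_000_000), 20_000))

def contains(sorted_list, x):
    """Binary-search membership test via bisect."""
    i = bisect.bisect_left(sorted_list, x)
    return i < len(sorted_list) and sorted_list[i] == x

# Each lookup now touches about log2(20000) ~ 15 elements
# instead of scanning up to 20,000.
```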

    The reason that this might be a much worse problem than can be handled by merely carting in more hardware, hooking it up, and powering it up, is that some of the problems seem to be a result of all the data from so many different sources that needs to be assembled. And these other sources are inevitably remote – mostly, I suspect, at other agencies, or at least other computer centers. Some of this may be alleviated with infrastructure upgrades all up and down the line. But some of it may be more intractable, and require a major redesign. What probably needs to be addressed is why there are so many accesses to other sites, and whether there is a way to significantly reduce them. We shall see.

Why anyone would trust these people with their personal medical and financial information is a great mystery to me.

JimMtnViewCaUSA | October 6, 2013 at 8:50 pm

These guys seem to have a problem with web sites?
The Amber Alert system to protect children has been allowed to go down.
Check it out:
amberalert.gov

Unfortunately, the shutdown has all but consumed the news cycle since the rollout of Healthcare.gov. While some media outlets have focused on the long wait times, very few are actually breaking down the glitches and testing the administration’s claims that volume is solely to blame.

Even if there had been no shutdown, the media narrative would still be favorable. Or at least very uncurious and accepting of the official government narrative.

We now live in a country where there is no longer any real distinction between the government and the media.

    Bruce Hayden in reply to Recovering Lutheran. | October 7, 2013 at 12:04 pm

    As pointed out above, even if volume was the immediate cause of the problems, that does not mean that the basic design was good. There are plenty of software programs, etc. that run just fine at low volume, but have significant design defects that prevent them from being scaled, or from being scaled with an acceptable level of resources. For example, some types of algorithms are rated on their complexity. If N is the number of items to process, the ultimate would be <= N (you can sometimes go below N with SIMD (i.e. vector) processing). Usually tolerable is Log2 N (but not really for very high volume applications such as Google, etc.). But sometimes you run into quadratic solutions (such as the N^2, or N*N, mentioned above). And note that while CS students are taught techniques that can be utilized for N and Log2 N designs, most of their projects utilize N^2, etc. designs.
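A quick illustrative calculation (nothing site-specific) makes the point about complexity classes concrete – log2 N barely grows while N^2 explodes:

```python
import math

# Operation counts for the complexity classes mentioned above.
# At N = 1,000,000: log2(N) is about 20 operations, N is a million,
# and N^2 is a trillion - which is why a design that feels fine in a
# class project can collapse at web scale.
for n in (100, 10_000, 1_000_000):
    print(f"N={n:>9,}  log2(N)={math.log2(n):7.2f}  N^2={n * n:>17,}")
```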

    What we do know so far is that they grossly underestimated the volume that they would receive, and grossly undertested their systems at that volume. They apparently based their volume estimates on the testing and rollout of Medicare Part D – but that was limited to seniors looking for a drug discount, and even then, they tested at maybe 6x the expected volume. Here, they were apparently experiencing several times the maximum volume that the system had been tested at (which should have been many times that rationally expected). It is the sort of incompetence and malpractice where heads should roll – except that this is the government, where only low level functionaries get fired, and Lois Lerner got her pension, after a six month paid vacation. And, where vendors, once they have the contract, can often hang on, getting paid many times their original contract price, to try to fix their systems, as long as they grease the right palms. (The history of government contracting is replete with examples of failed government contracts that were many times over budget before being finally abandoned).

This is a problem that has shown up over and over on government websites for almost 20 years. The in-house IT people are incompetent, and don’t know what they are doing. It should have been outsourced from one end to the other to multiple tech companies, to make sure every piece worked. I hate to think of the number of so-called IT specialists the government has on this, and the amount that it is costing us, the taxpayers.

    Aridog in reply to bawatkins. | October 7, 2013 at 10:11 am

    I can tell you how many IT specialists the government has on this…almost none. Read my comment about OMB Circular A-76 and “Commercial Activities” earlier in this thread. IT work per se is defined as “non-governmental” and therefore must be contracted out with minimal oversight.

Well, come on! We know what sort of IT geniuses the Obama administration hires! I seem to recall an “authentic” birth certificate that was posted while it was still in working layers!

There is no denying the shutdown has sucked all the media oxygen for politics, which has helped Obama immensely after his humiliation by Putin over Syria and his bumbling kowtowing to the mad mullahs of Iran, not to mention the utter incompetence of the ACA rollout.

The problem isn’t that it is a Canadian company. The problem is the contract was probably let to someone’s crony without thorough vetting or bidding, or else the process was supervised by someone with no idea what they are doing, which is pretty much the standard in this Administration where no one is allowed to be smarter or prettier than President Princess Peacock.

I bet it was a cost-plus deal, which just begs the contractor to drag it out and run up the costs.

Went to Healthcare.gov with Firefox this evening. Among the installed addons is Abine DoNotTrackMe, which reports the main page at Healthcare.gov links to 5 tracking companies: Optimizely, CrazyEgg, Doubleclick, Google Analytics, ChartBeat. No idea what effect these have on the site’s performance – just thought it curious having all that on a dot GOV site. By comparison, Whitehouse.gov and USA.gov have just 2 and 3 tracking companies, respectively. LegalInsurrection has 9 on this page.

It does make me wonder which browsers healthcare.gov is designed for, and whether or not Adblock Plus, DNTMe, etc., will break the web site.

After waiting 20 minutes to get to the account signup page, I gave up. On a whim, I went to eSurance.com, answered 4 questions (Birthdate, gender, zip code, smoker) and had insurance quotes in under a minute for 5 different plans beginning on 1/1/2014. The 2 bronze PPO plans from BCBS are about $30/mo. less than what I’m paying for the IL CHIP insurance.

One interesting observation about the eSurance.com site; there are some 81 insurance policies starting at $146/mo. available from 5 different companies for this locale were I to need coverage NOW. After 1/1/14, there are just 5 plans (2 bronze, 2 silver, 1 gold, no platinum), starting at $432/mo. from BCBS of IL, who is NOT among the 5 companies I can buy insurance from NOW.

Single source price fixing. Yup. That’s SO MUCH BETTER than what we have now. 🙄

TrooperJohnSmith | October 7, 2013 at 2:13 am

Calling bullsh!t on ANYTHING that comes from the Obama Administration is a safe bet, right on par with calling ‘heads’ using a two-headed coin.

These people are even starting to give Chicago Corruption a bad name.

[…] about us up to and including our bra size. As for making the website safer, well……. https://legalinsurrection.com/2013/10/it-experts-skeptical-obamacare-website-has-just-traffic-problem…  If the feds are having this much trouble, I’m not that confident about the state. I heard […]

All these analysts are talking about “the uninsured.” But by far the largest group entering Obamacare are the folks whose individual policies will have to comply in January. We just got our notices that we have to change our plan.

There are 25 million people who are losing their current plan due to Obamacare. Bet you this affects the web traffic.

Prediction: eSurance will get most of the traffic.

    I agree. We are also looking at renewing our individual policy and tried to get information through BCBS the week before Oct 1 without luck. A news story in April of this year stated that a Kaiser poll found that 40% of Americans did not realize obamacare was still a law. I believe that because most Americans cannot name their representative or senator.

They did try to contract Obamacare out to the bigs and were told “No thank you.” They saw it for the impossible mess it is from about two years ago. They also knew they would be blamed for it. A lot of the smaller companies were approached to form a “consortium” to take it on. They looked at it and said “No”. Barry the Blamer was a known quantity even back then. It is conservatively estimated that it would take five YEARS to set it up and an unknown amount of money and several geniuses to design it. It was never going to happen. Remember that the entire cluster=(@$% originated from one of Obama’s throwaway lines because Hillary Clinton was beating him with a pickax at a symposium on healthcare. Now don’t we all feel better.

    Aridog in reply to Elliott. | October 7, 2013 at 10:45 am

    Actually, how many people in the USA are aware that the government has had a health care plan working for over 50 years now? It is portable and no prior-condition exclusions exist in it. It is managed by a single agency and covers all 50 states, with about a dozen alternative providers and plans in each location. The contracting process is already established, so there would be no need to create another of many layers…at most hiring mid-level civil servants to handle increased volume if expanded to cover everyone.

    Only issues to settle would be, first, do we want to subsidize the indigent for health care (which we already do without acknowledging it). Then, determine if we want to means test for eligibility. Finally, we have to accept the fact that there will be 10% or so of us who just won’t participate.

    We’d quickly find out how many additional people would participate….and we’d do it without the fandango of the PPACA. A working program already exists, all we’d need to do is add some criteria for implementation more widely. But oh no, the social engineers just have to build their own castle from scratch.

    The PPACA is all about power, not health care.

[…] if the failure of the site was due to its extreme popularity (a claim that has justifiably been called into question by many IT experts), the question is: why? A technical blog explained just how stable the Obama […]