With all the attention that the soaring cost of healthcare has been getting over the last few election cycles, it’s easy to assume that this is a new phenomenon, and that back in the “old days”, it wasn’t a concern.
The actual history is different, however: as I have noted in previous papers on the topic, the rising cost of healthcare has been an issue in the US since the early 1920s, and it led to the formation of the Committee on the Costs of Medical Care in 1927.
Although health insurance firms had long been concerned about high medical costs, which they identified as at least partly the result of unnecessary procedures and hospital stays, it was the 1965 amendments to the Social Security Act, which created Medicare and Medicaid (Titles XVIII and XIX), that provided the impetus for a focus on methods to standardize admission and hospital-stay decisions.
During this period, there was significant variation among physicians, hospitals, and regions in the use of procedures and inpatient admissions, and it was typical for patients to be admitted for weeks or even months for observation, or for procedures that today would require less than a week or would be performed on an outpatient basis.
The Social Security provisions required clinical evaluation and review, but did not set criteria. In the early 1970s, a Congressional subcommittee estimated that there were over two million unnecessary surgeries per year across the US. As a result, there was a growing demand for standards governing procedures and inpatient admissions.
To give a sense of scale, physicians who fail to follow evidence-based clinical criteria add an estimated $500 billion in cost to U.S. healthcare by providing overly aggressive or ineffective care. In a study of the cost contribution of cases that do not meet clinical guidelines, Cutler et al. found that patient demand was not a significant contributor, but that physician preferences unsupported by clinical evidence accounted for 36% of end-of-life spending and 17% of total healthcare spending.
One approach to reducing costs and controlling the “exuberance” of a free market that naturally tends toward increasing use of medical products and services is to apply clinical episode-of-care criteria. Utilization management (UM) criteria can be used before an encounter (prospective review), as part of triage and episode-of-care decisions (concurrent review), or as a quality-improvement tool to assess an episode of care after the fact (retrospective review).
Against this backdrop of cost burden, it is clear that UM plays a critical role in the provision of appropriate care. UM adds value by reducing the incidence of unnecessary care and by placing the patient at the most appropriate level of care with the least possible delay. Effective UM supports efficient scheduling of inpatient admissions and procedures by reducing the number of unnecessary admissions and by providing an evidence-based mechanism for admission decisions. Modern UM balances cost against care through a systematic process and evidence-based criteria.
1. Field, M. J. (1989). Controlling Costs and Changing Patient Care? The Role of Utilization Management. National Academies Press.
2. Goldberg, C. (2014). “‘Cowboy’ Doctors Could Be A Half-A-Trillion-Dollar American Problem.”
3. Cutler, D., Skinner, J., Stern, D., and Wennberg, D. (2013). “Physician Beliefs and Patient Preferences: A New Look at Regional Variation in Health Care Spending.” National Bureau of Economic Research.