Repetitions don't just drive down rates; they also undermine consistency.
I used to work a lot for agencies that served one big client, usually a tech company. They always demanded one particular CAT tool and paid you less (or nothing at all) for matches (previous translations stored in the database). So you were encouraged to reuse old translations, even if they were really bad.
All the matches, for example, would be written in the passive voice, use the wrong terms or simply parrot the source text, because the previous translator obviously either a) had no idea or b) was using machine translation.
My translation would be very different, and so the final text would be an inconsistent patchwork that no one would bother to read.
I would even offer the agency’s project manager to batch-update the entire translation memory for free, but it wasn’t possible. Many CAT tools, most notably Across, don’t let translators change any entries in the translation memory. They are designed to collect work from countless anonymous contributors who never see the bigger picture.
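To give an idea of how small a job such a batch update can be: translation memories are usually exchanged as TMX files (a standard XML format), and fixing one recurring wrong term across every target segment is a few lines of scripting. This is only a sketch; the file name, the term pair and the German target language are made-up examples, not anything from a real project.

```python
import xml.etree.ElementTree as ET

WRONG = "Bedienoberfläche"    # hypothetical wrong term found in old matches
RIGHT = "Benutzeroberfläche"  # hypothetical preferred term

tree = ET.parse("memory.tmx")  # hypothetical exported translation memory

for tu in tree.iter("tu"):          # each translation unit
    for tuv in tu.iter("tuv"):      # each language variant of that unit
        # xml:lang is exposed by ElementTree under the XML namespace
        lang = tuv.get("{http://www.w3.org/XML/1998/namespace}lang", "")
        if lang.lower().startswith("de"):   # only touch the target language
            seg = tuv.find("seg")
            if seg is not None and seg.text and WRONG in seg.text:
                seg.text = seg.text.replace(WRONG, RIGHT)

tree.write("memory_fixed.tmx", encoding="utf-8", xml_declaration=True)
```

That is the whole point: the fix is trivial once you are allowed to touch the memory at all.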
Many of the successful CAT tools obstruct the translation process, not just for service providers but also for the client. Your data becomes so “secure” that even you can’t touch it anymore: 100% matches are automatically inserted into the target text and locked, so you can’t edit bad pretranslations.
This was essentially what now happens with MT post-editing, only without the ability to post-edit. I have nothing against editing other people’s (or machines’) work, but it just isn’t faster than translating it right the first time.
This was around 2010. I have since stopped accepting high-repetition projects or projects where I have to use a specific CAT tool, even though they can pay quite well if you do them right. I do use my own CAT tool and translation memories provided by the client, but I take the time (and bill for it) to make the old matches consistent.