As always, the answer is that it depends on the content-addressable storage (CAS) architecture. But some vendors put the limit at around 80 million to 150 million objects. I know, wow; who will ever need more than 150 million objects?
Probably YOU in a few years -- maybe even now. For example, take the best-case scenario, in which you start to see a performance hit at 150 million objects, and assume the average file size is 250 KB. That means 4 files per MB, 4,000 per Gbyte and 4 million per Tbyte. So 150 million objects or files would equal only about 37.5 Tbytes of archive capacity.
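The arithmetic above can be sketched in a few lines. This is just the back-of-the-envelope math from the text, using decimal units (1 Tbyte = 10^9 KB); the function name and the 150-million figure are illustrative, not a specific vendor's spec.

```python
def capacity_at_object_limit(object_limit: int, avg_file_kb: float) -> float:
    """Archive capacity (decimal Tbytes) consumed when the object-count
    limit is reached, given an average file size in KB."""
    return object_limit * avg_file_kb / 1e9  # 1 Tbyte = 1e9 KB (decimal)

# 150 million objects at an average of 250 KB per file:
print(capacity_at_object_limit(150_000_000, 250))  # -> 37.5 Tbytes
```

In other words, the object ceiling, not the disk, defines the effective size of the archive once files are small enough.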
Depending on the average file size, with some CAS archives you could actually hit the object-count ceiling before you run out of raw capacity.
For an archive, 37.5 Tbytes is not very much when we are talking about storage that is supposed to hold years and years of information. As we talk to end users about archive design, more and more frequently we are planning for petabytes of information that needs to be stored. The real gotcha is whether 250 KB is a realistic number for average file size.
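To see how sensitive the gotcha is, a quick sweep over a few hypothetical average file sizes shows where a 150-million-object ceiling lands you. The sizes chosen here are illustrative assumptions, not data from any particular environment:

```python
OBJECT_LIMIT = 150_000_000  # illustrative ceiling from the discussion above

# Hypothetical average file sizes in KB: small e-mail, office doc, scan, image
for avg_kb in (50, 100, 250, 1000):
    tbytes = OBJECT_LIMIT * avg_kb / 1e9  # decimal units: 1 Tbyte = 1e9 KB
    print(f"avg file {avg_kb:>4} KB -> object limit hit at {tbytes:6.1f} Tbytes")
```

If your archive skews toward small files (e-mail is the classic case), the object count runs out at a fraction of the raw capacity you bought; if it skews large, the limit may never bite.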