I have spent my week deep in thought on how to secure connections from third-party business partners into my organization. Many of these partners work as an extension of the company, handling outsourced development and operations. They have access to source code, business documents, and other sensitive data we need to keep out of the wrong hands. Data theft is a serious concern, as are other issues, such as a malware infection hopping from a partner's system onto our network.
When I ask my coworkers about this issue, some say to give full, open access, while others advise locking down resources as tightly as possible. This is a problem many security professionals wrestle with, and I'm not sure IT has the right answer for every situation.
Traditional theory tells us to use a whitelist: allow only specific source and destination addresses, ports and protocols, and grant access only to those we believe to be safe. In dynamic, changing environments, however, this leads to a steady stream of change requests for IT and reduced productivity for the requestors. Have you ever developed code only to find out when it moved to production that some firewall rule blocks access and IT can't make the change for a week? I have, and it sucks.
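To make the whitelist model concrete, here is a minimal sketch of its logic in Python. The partner subnet, internal addresses, and ports are invented for illustration; a real firewall evaluates the same kind of source/destination/port/protocol tuples against an explicit allow list, with everything else denied by default.

```python
from ipaddress import ip_address, ip_network

# Hypothetical allow rules: (partner source subnet, internal destination,
# port, protocol). Anything not matching a rule is denied by default.
ALLOW_RULES = [
    (ip_network("203.0.113.0/24"), ip_address("10.0.0.20"), 443, "tcp"),
    (ip_network("203.0.113.0/24"), ip_address("10.0.0.21"), 22, "tcp"),
]

def whitelist_allows(src: str, dst: str, port: int, proto: str) -> bool:
    """Default-deny: permit only traffic matching an explicit rule."""
    src_ip, dst_ip = ip_address(src), ip_address(dst)
    return any(
        src_ip in r_src and dst_ip == r_dst and port == r_port and proto == r_proto
        for r_src, r_dst, r_port, r_proto in ALLOW_RULES
    )
```

The pain point described above shows up as soon as a partner needs a new destination or port: nothing works until someone adds a rule, so every change in responsibilities means a change request to IT.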
Blacklisting can be more effective. You identify what needs protecting and deny access to it, allowing everything else. In most environments, I'd bet there are fewer systems and data stores that need to be protected than need to be accessed. This approach may require moving some systems or blocking entire subnets, but whitelisting can entail the same work.
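The blacklist model inverts the default. A sketch, again with invented subnets standing in for whatever sensitive segments your own inventory identifies: traffic is allowed unless the destination sits inside a protected subnet.

```python
from ipaddress import ip_address, ip_network

# Hypothetical protected subnets: the handful of segments holding data
# partners must never reach. Everything else is allowed by default.
PROTECTED_SUBNETS = [
    ip_network("10.0.50.0/24"),  # e.g., source-code repositories
    ip_network("10.0.60.0/24"),  # e.g., finance and HR systems
]

def blacklist_allows(dst: str) -> bool:
    """Default-allow: deny only destinations inside a protected subnet."""
    dst_ip = ip_address(dst)
    return not any(dst_ip in net for net in PROTECTED_SUBNETS)
```

Note the trade-off this sketch makes visible: the deny list stays short and stable as partner responsibilities grow, but anything you forget to list is reachable, so the approach depends entirely on a complete inventory of what needs protecting.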
I am willing to admit this is not a one-size-fits-all approach. If you have a limited number of partners that need only a small set of static resources, whitelisting is the way to go. But if you have integrated partners with ever-expanding responsibilities, evaluate blacklisting as a serious alternative. Once the access control method is in place, be it a whitelist or a blacklist, other protections can be layered on as needed to identify and respond to access violations or attacks.