Earlier this week I was having an interesting conversation with a colleague in the Infosec industry, and the topic of “Is the internet broken beyond repair” came up. I don’t mean to sound like a defeatist, but the internet was not designed for the current use-case. Each year there are all sorts of vendor research papers about the biggest risks, new TTPs, etc. – and each year we seem to see more of the same.
There is also a constant echoing of “Just implement the basics” (myself being guilty of this). Is that really the answer though? If those controls and processes were really “basic,” wouldn’t the majority of organizations already have them in place? My friend brought up Project Phoenix, which was definitely an interesting idea. After thinking about this for a while, and continuing to work through my SEC504 coursework, I thought the Zero Trust model is worth including in the conversation.
Ori Eisen’s paper on Internet 2.0 brings a lot of interesting thoughts to the table. Rather than replacing the internet wholesale, Ori is looking to see a second version built out in parallel and used only where security is required (e.g. payment exchanges). At a high level, Ori’s proposal calls for end-to-end management of: Registration, Jurisdiction, Monitoring, Enforcement, and Technology. This puts the major focus on security at the cost of privacy and convenience – something many users will shy away from.
The Zero Trust network concept was originally introduced by Forrester, and Google has really taken the idea and run with it by making a ton of information publicly available. The whole approach is to assume all networks and devices are insecure and cannot be trusted. Each step along the way (system and data access) requires verification.
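To make the “verify at every step” idea concrete, here is a minimal sketch of a Zero Trust-style access decision. All the names (`AccessRequest`, `is_allowed`, the specific signals checked) are hypothetical illustrations, not any real product’s API – the point is simply that network location grants nothing, and every request is evaluated on identity and device posture.

```python
from dataclasses import dataclass

# Hypothetical request context: in a Zero Trust model, every access
# decision considers who is asking and from what device -- there is
# no trusted "inside the perimeter".
@dataclass
class AccessRequest:
    user_authenticated: bool   # strong identity (e.g. MFA) verified
    device_compliant: bool     # device posture check passed
    resource: str              # what is being accessed

def is_allowed(req: AccessRequest) -> bool:
    """Deny by default; every signal must check out on every request."""
    if not req.user_authenticated:
        return False
    if not req.device_compliant:
        return False
    # A real policy engine would layer on more context here
    # (resource sensitivity, location, time, risk score, etc.).
    return True

# Each call re-evaluates the policy -- nothing is cached or assumed
# just because a prior request from the same network succeeded.
print(is_allowed(AccessRequest(True, True, "payroll-db")))   # True
print(is_allowed(AccessRequest(True, False, "payroll-db")))  # False
```

In a real deployment this decision sits in a policy enforcement point in front of every system and data store, which is exactly the “verification at each step” described above.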
Will we keep running with business as usual, adding more agents, more layers, etc.? The world is only getting more connected, and security will have to continue to fight its way into the mainstream.