Monday, May 18, 2009

Thoughts on Distributed Firewalls

Author: Steven M. Bellovin

BibTeX:

@ARTICLE{Bellovin99distributedfirewalls,
author = {Steven M. Bellovin},
title = {Distributed Firewalls},
journal = {IEEE Communications Magazine},
year = {1999},
volume = {32},
pages = {50--57}
}

also: S. M. Bellovin. Distributed Firewalls. ;login: magazine, special issue on security, November 1999.

Summary:

Centralize policy, distribute enforcement. The end-host knows exactly what is going on in the machine and can make more informed decisions. Distributed enforcement no longer depends on the topology of a network.

Use IPsec certificates to authenticate hosts. Use hybrid firewalls (both distributed and traditional), with application proxies for complex application rules.
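A minimal sketch of what host-side enforcement against a centrally distributed, identity-keyed policy might look like (all rule names and identities here are illustrative, not from the paper):

```python
# Hypothetical sketch of end-host policy enforcement: the central policy
# server ships rules keyed by certificate identity (not IP or topology),
# and each host checks its own inbound connections against them.

POLICY = {
    # (peer certificate identity, destination port) -> decision
    ("cn=mailserver.example.com", 25): "allow",
    ("cn=backup.example.com", 873): "allow",
}

def check_connection(peer_cert_cn: str, dst_port: int) -> str:
    """Decide based on the authenticated identity; default-deny."""
    return POLICY.get((peer_cert_cn, dst_port), "deny")

print(check_connection("cn=mailserver.example.com", 25))  # allow
print(check_connection("cn=attacker.example.net", 25))    # deny
```

Because the lookup keys on certificate identity rather than source address, the same policy works regardless of where the host sits in the topology.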

Claim that:
  • Can do application-level filtering by distributing application rules (e.g. no JavaScript in browsers)
  • Can protect machines in internal network from each other
  • Can understand app level semantics
  • Can protect mobile nodes
  • Can manage changes such as updates by distributing new certs and lowering privileges for old certs
Threat comparison between traditional and distributed:
  • Service exposure and port scanning: Comparable
  • Application-level proxies: conventional wins in most cases
  • DoS: Distributed wins
  • IDS: conventional easier, though distributed can gather more data.
  • Insider attacks: comparable
The claim the author considers most important is freedom from topology restrictions.

The Good:
Yes it is definitely a great idea to centralize policy and make sure that it is consistent across multiple security mechanisms. It is also true that the end-host knows more about a connection and thus can enforce more things. Distributed firewalls also free the security policy from relying on topology.

The Bad:
  • Enforcement on the end-host is a bad idea because it is easy to subvert the system (as was noted in the paper). This would make the admin's policies useless.
  • The author mentions application-level enforcement, but that means the firewall must somehow know what each application is doing and, for example, realize that some HTML contains JavaScript or the like. That is very difficult to do in a generic way. How would one keep track of all the new applications that come up? Indeed, the author acknowledges this problem just before the conclusion.
  • The argument that conventional firewalls are more susceptible to DoS attacks is not quite correct. Firewall boxes are usually much more powerful than end-hosts, so end-hosts are the ones more susceptible to DoS attacks. In addition, an enterprise without a conventional firewall at its Internet connection point can suffer a DoS attack that extends deeper into the network, such as an NFS DoS.
The paper was well-written, though I do not feel it fixes much. Enforcement has to be impossible to circumvent so that an administrator always has the final word. ident++ deals with this issue exactly. In addition, ident++ moves the enforcement away from the end-host so it doesn't get DoS'ed. It also asks both end points of a connection whether the connection is allowed, making sure no bandwidth is wasted anywhere and no host misbehaves.

Thursday, May 14, 2009

Thoughts on Trends in Mobile and Wireless Talk

Speaker: Dr. Nambi Seshadri, CTO Mobile and Wireless at Broadcom

Date: 5/13/2009

Summary:

Dr. Seshadri pointed out the trends and technologies that shaped the first three generations of wireless, then he discussed the trends that will shape the fourth.

  1. 802.11 FH/DS-SS, Analog Voice

    • Frequency Reuse

    • Handoffs

    • cell splitting



  2. 802.11/a/b/g, bluetooth, Digital Voice, SMS, GPRS

    • Digital communication over fading channels/Viterbi Algorithm

    • Digital speech compression (CELP)

    • VLSI



  3. 100-600 Mbps WLAN, 480-1000Mbps WPAN, 802.11n MIMO, MMS, EDGE/WCDMA, EVDO/HSDPA

    • CMOS RF

    • Java/browsers/email/sync

    • camera/mp3

    • open OS (Symbian, Windows mobile, LiMo, Android)



  4. >1Gbps WLAN/WPAN, Broadband Multimedia/Video

    • Ubiquitous broadband

    • Convergence on "Open" platform

    • Industry transformation: free OSes (Android/Symbian); make money from content/services/apps/advertising
Then he talked about technologies shaping M 4.0 (mobile 4th gen) and he mentioned computing power, sensors, positioning, open OS, security, high BW connectivity, alternative power sources (inductive power), health monitoring...

Then he talked for a while about ubiquitous positioning using Assisted GPS: GNSS satellites, Wi-Fi hotspots, CDMA towers, NMR/MRL measurements of Rx power, Cell ID, digital TV towers...

Multimodal (multiple wireless modes) chips are driving cost down.

In all, it was an OK summary, nothing that sparks fantasies or that has any depth.

Wednesday, May 13, 2009

Thoughts on The Collective: A Cache-Based System Management Architecture

Authors: Ramesh Chandra, Nickolai Zeldovich, Constantine Sapuntzakis, Monica S. Lam

BibTeX:

@INPROCEEDINGS{thecollective,
author = {Ramesh Chandra and Nickolai Zeldovich and Constantine Sapuntzakis and Monica S. Lam},
title = {The Collective: A Cache-Based System Management Architecture},
booktitle = {Proc. 2nd Symposium on Networked Systems Design \& Implementation (NSDI)},
year = {2005},
pages = {259--272}
}

Summary:

The Collective is a system that allows users to load virtual appliances from a cloud, cache them locally, and run them. There are two types of data: user and appliance. User data is mutable by the user and is stored and backed up online as the user modifies it (this includes things like docs and profiles). Appliance data is immutable (except by an admin), and a pristine unmodified copy is re-run every time a VM is started with that appliance.

The Collective simplifies deployment and management. All machines run the Virtual Appliance Transceiver (VAT) software, which contains the VMM and provides an interface for the user to log in, select an appliance, and access his data. The VAT self-updates without requiring any intervention from the user.

Appliances can be updated by an admin and on the next reboot, a user would use the updated image. Updates are tracked by versioning using a simple numbering and directory hierarchy scheme. When downloaded, they are stored using Copy-on-Write (COW) disk caches for large blocks and use replication for small meta-data. It is possible to cache full appliance images so that a user can work offline disconnected from the appliance repo.
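A toy sketch of my reading of the versioning plus copy-on-write scheme (not the paper's actual on-disk format): published appliance versions stay pristine, and each VM run writes into a private overlay:

```python
# Toy model: the repo holds immutable, numbered appliance versions;
# a CowDisk reads through to the pristine image but keeps all writes
# in a private overlay, so the base is never modified.

class ApplianceRepo:
    def __init__(self):
        self.versions = {}          # version number -> {block_id: bytes}

    def publish(self, version, blocks):
        self.versions[version] = dict(blocks)   # immutable once published

class CowDisk:
    """Reads fall through to the pristine image; writes stay private."""
    def __init__(self, repo, version):
        self.base = repo.versions[version]
        self.overlay = {}

    def read(self, block_id):
        return self.overlay.get(block_id, self.base.get(block_id))

    def write(self, block_id, data):
        self.overlay[block_id] = data           # base is never touched

repo = ApplianceRepo()
repo.publish(1, {0: b"kernel", 1: b"apps"})
disk = CowDisk(repo, 1)
disk.write(1, b"user-tweaked")
print(disk.read(1))                 # b'user-tweaked' (private copy)
print(repo.versions[1][1])          # b'apps' (pristine image intact)
```

On the next reboot a fresh CowDisk is created against the (possibly newer) published version, which is why the user always starts from a pristine image.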

The evaluation section is very well constructed and nicely sums up how well the system works. They found prefetching works, and using traces to decide what to prefetch can be a significant boon to performance. Interactive and I/O intensive work is as expected: bad. Maintenance/upgrading/deploying is easy and painless.

The Good:
I've been thinking about a system like this for a while, and they've taken it to completion and even started a company with it (MokaFive). I think the architecture is sound but is limited by the technology (bandwidth, virtualization, ...). I enjoyed reading the experience section because it was an evaluation of whether the system met its goals. I will definitely read their eval and experience section again before writing my next paper.

The Bad:
Unfortunately, I/O bound and interactive ops are horrible. It's not clear if much can be done to remedy the situation, but since the Collective will be used for interactive apps and to manage users' desktops, it seems that the Collective is unusable. I would really like to manage my machines this way, but I'm already sick of Windows running so slowly in a VM.

They have not mentioned much about security. They only said they use SSH to transfer data and perform authentication. I guess this was not a concern, and they assumed that the only important thing was to make sure the appliance itself did not get compromised. I would argue that the important piece is for the user data not to be compromised. Maybe the assumption they run with is that if a VM is secured and patched, then the user data will be safe.

Monday, May 11, 2009

Thoughts on Practical Declarative Network Management

Authors: Timothy Hinrichs, Natasha Gude, Martin Casado, John Mitchell, Scott Shenker

Published in: Workshop on Research in Enterprise Networks (WREN) 2009

Summary:
The authors designed, implemented, and tested a language for declarative network policies --- Flow Management Language (FML). The language is declarative and has no ordering. Conflicts among rules can arise and are handled by prioritizing keywords (e.g. deny takes priority over allow). Because the language has no ordering, the authors claim it is easier to write and reason about and can accommodate multiple authors more easily. Application developers can have control over flows and can write rules themselves.
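Keyword-prioritized conflict resolution, as I understand it, could be sketched like this (the rule representation and keyword list are mine, not FML syntax):

```python
# Sketch of unordered rule evaluation with keyword-prioritized conflict
# resolution: every matching rule fires regardless of where it appears,
# and a fixed priority among decision keywords resolves conflicts.
# The keyword list below is illustrative.

PRIORITY = ["deny", "waypoint", "allow"]   # deny beats waypoint beats allow

def decide(flow, rules):
    """rules: list of (predicate, keyword); order of the list is irrelevant."""
    decisions = {keyword for match, keyword in rules if match(flow)}
    for keyword in PRIORITY:               # highest-priority keyword wins
        if keyword in decisions:
            return keyword
    return "deny"                          # default when nothing matches

rules = [
    (lambda f: f["user"] == "guest", "deny"),
    (lambda f: f["port"] == 80, "allow"),
]
print(decide({"user": "guest", "port": 80}, rules))   # deny (wins conflict)
print(decide({"user": "alice", "port": 80}, rules))   # allow
```

Note that reordering the `rules` list changes nothing; only the keyword priority decides conflicts.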

They implemented the FML engine on top of Nox and deployed it on two campuses. The implementation processes rules in linear time and can handle heavy loads; it uses trees to evaluate rules. The deployment seems to be working, and they were able to introduce new keywords (HttpRedirect) into the language to handle complex cases.

The Good:
The paper makes it clear that FML is a flexible language that can declare network policies nicely. Not only is it flexible, concise, and based on principals, it can also be evaluated in linear time. The implementation and deployment provide a good evaluation. The tree-based rule evaluation is a very cute idea; I think I will be using it for my own work :) I liked the fact that new keywords can be added, but is it easy to do?

The Bad:
The authors argue that ordering is a bad idea. Yet they end up enforcing some form of ordering by making policies cascade. OK, so it's not ordered within a single cascade, but I'm not sure you would write rules without cascades. They did not make the case against ordering well. I think it is really difficult to reason about flows in the case of conflicts. I do not see how the combination of conflicts + keyword prioritization + cascades is simpler than a total ordering on all statements. The argument about making it easier for multiple authors does not really hold water, because it seems the authors themselves would need priorities, and you don't want conflicts anywhere. In fact, I don't see how multiple authors can operate easily in FML, even with cascades!

Unfortunately, I don't see how application developers (or do they mean Nox applications?) can control their own flows. Is there an API? I thought FML was pre-compiled; how do users add their own stuff there?

The Ugly:
I really did not like the non-ordering, cascading, and keyword prioritization stuff. It seemed to make things much more complex. The fact that conflicts are allowed negates a whole bunch of things they had said about making it easier to reason about policies. I don't see its necessity; or maybe there is some necessity (such as simplifying the implementation), but I don't recall it being mentioned in the paper.

Wednesday, May 6, 2009

Thoughts on SCADS: Scale-Independent Storage for Social Computing Applications

Authors: Michael Armbrust, Armando Fox, David Patterson, Nick Lanham, Beth Trushkowsky, Jesse Trutna, Haruki Oh

BibTeX:
@INPROCEEDINGS{citation185,
title = {{SCADS}: Scale-independent storage for social computing applications},
year = {2009},
month = jan,
booktitle = {Conference on Innovative Data Systems Research {(CIDR)}},
URL = {http://www-db.cs.wisc.edu/cidr/cidr2009/Paper_86.pdf},
author = {Michael Armbrust and Armando Fox and David Patterson and Nick Lanham and Haruki Oh and Beth Trushkowsky and Jesse Trutna}
}

Summary:
The authors are working on a framework that allows developers to scale up as well as scale down easily. There are three main parts to this: A query language that does not compromise performance with scale, a declarative policy that allows developers to specify performance levels that the framework should achieve, and machine learning algorithms to rapidly scale up and down. One of the main points is that consistency can be traded for performance.
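The declarative performance policy and automatic scaling could be rendered roughly like this (a hypothetical sketch; the field names and the naive capacity math are mine, not the paper's):

```python
# Hypothetical rendering of a "declarative performance policy": the
# developer states targets, and the framework decides how many storage
# nodes to run. The real system uses machine learning; here a naive
# ceiling division stands in for the scaler.

sla = {
    "latency_ms_99th": 100,     # 99th-percentile request latency target
    "fraction_met": 0.99,       # fraction of requests that must meet it
    "consistency": "eventual",  # consistency traded for performance
}

def nodes_needed(requests_per_sec, per_node_capacity=1000):
    """Stand-in for the ML scaler: ceiling division over node capacity."""
    return -(-requests_per_sec // per_node_capacity)

print(nodes_needed(2500))   # 3 nodes to absorb 2500 req/s
```

The interesting part of the paper is of course everything this sketch hides: predicting capacity per node and scaling down again when load drops.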

The Good:
The system looks cool. It would be nice to have the performance of memcached with a flexible query language. It's nice to throw all the load on some machine learning algorithm to scale up and down.

The Bad:
The system is still preliminary.

The Ugly:

This seems like a huge amount of work. I wonder if it will ever work.

Tuesday, May 5, 2009

Thoughts on Efficient Instantiations of Tweakable Block Ciphers and Refinements to Modes OCB and PMAC

Author: Phillip Rogaway

Summary:
Tweakable block ciphers are ciphers that take three inputs (key, tweak, block) instead of the usual two (key, block) such that different tweaks can create different permutations that are all still secure. This eliminates the need for changing keys if we want to have a different block cipher. The paper describes how efficient tweakable block ciphers can be constructed from regular block ciphers such that tweaks can be incremented cheaply, while keeping the tweaked cipher secure.
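As a rough illustration of the construction (my sketch of the XEX-style idea, with a toy stand-in permutation instead of a real block cipher), stepping the tweak index costs only one doubling in GF(2^128) rather than a new key schedule:

```python
# XEX-style tweaking, sketched: the tweak (nonce, i) derives an offset
# delta that is XORed before and after the underlying cipher call, and
# going from tweak i to i+1 is a single GF(2^128) doubling.

BLOCK = 1 << 128
R = 0x87                         # reduction constant for GF(2^128)

def double(x):
    """Multiply by alpha (x) in GF(2^128)."""
    x <<= 1
    if x >= BLOCK:
        x = (x - BLOCK) ^ R
    return x

def toy_E(key, block):
    """Placeholder invertible map; obviously NOT a secure cipher."""
    return (block + key) % BLOCK

def tweaked_E(key, nonce, i, block):
    delta = toy_E(key, nonce)    # offset derived once per nonce
    for _ in range(i):           # incrementing i costs one doubling per step
        delta = double(delta)
    return toy_E(key, block ^ delta) ^ delta

# Different tweaks give different permutations of the same input:
a = tweaked_E(12345, 7, 1, 42)
b = tweaked_E(12345, 7, 2, 42)
print(a != b)                    # True
```

This is exactly the property OCB and PMAC exploit: each block position gets its own permutation for almost no extra work per block.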

The paper then describes changes to OCB and PMAC that make the two algorithms easier to understand and their security simpler to prove.

The Good:
I'm not a cryptographer, so the fact that I understood most of the paper is very significant. The paper is well written, and somewhat easy to understand. I didn't go through the proofs, so I can't say anything about them. It's a good introduction to tweakable block ciphers, and what can be done with them. The constructions they use for making tweakable block ciphers easy and efficient to construct are nice and seem to be quite useful.

The Bad:
I didn't really get what else I can do using these tweakable block ciphers and the instantiations Rogaway came up with. I wish he explained more plainly what these can be used for other than improving OCB and PMAC.

The Ugly:
The notation was a little annoying. It was difficult to keep track of what the tildes and bars meant. While consistent, the notation was difficult to follow because too many symbols were introduced or used in place of others "to simplify the notation".

Thoughts on Hurry Down Sunshine

Author: Michael Greenberg

Summary:
The author relates the story of when his daughter Sally had her first manic attack, i.e. went mad. She was hospitalized and then released. Her mania was kept a secret. Sally herself appears to be a very smart and poetic kid. The author lived in a state of frustration and confusion over the summer. The book includes a significant number of anecdotes about other famous people and their dealings with mania, like James Joyce and his daughter. The author also relates the stories of other members of his family.

The Good:
The book was a wonderful read. It was simple to understand, easy to read, and very captivating. I found it difficult to put down. The book offered some very interesting insights into mania and how it affects the victim and her family. It also has some interesting facts on mania to teach.

The Bad:
None really.

The Ugly:
None really.