Re: [RFC] [PATCH] Replace security fields with hashtable

From: Chris Wright (chrisw@private)
Date: Wed Oct 27 2004 - 09:42:19 PDT


* Serge E. Hallyn (serue@private) wrote:
> +/*
> + * lsm_cache_entry: data for a pointer for an lsm.
> + *
> + * 20 bytes
> + */
> +struct lsm_cache_entry {
> +	struct hlist_node list;
> +	void *ptr;
> +	int lsm_id;
> +	void *data;
> +};

Aside from the removal of the direct ->security pointer, this is
problematic.  It adds 20 bytes per object on a 32-bit system, five
times the size of the single pointer we have today.  The advantage of
a fully flexible hashtable has to be felt across the board; hurting
performance for the single-module common case is a no-go.
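
For reference, the arithmetic on a 32-bit build, as a minimal
userspace sketch (stand-in definitions; the real hlist_node lives in
linux/list.h):

	#include <stdio.h>

	/* stand-in for the kernel's struct hlist_node: two pointers */
	struct hlist_node {
		struct hlist_node *next, **pprev;
	};

	struct lsm_cache_entry {
		struct hlist_node list;		/* 8 bytes on ILP32 */
		void *ptr;			/* 4 bytes */
		int lsm_id;			/* 4 bytes */
		void *data;			/* 4 bytes */
	};

	int main(void)
	{
		/* prints "entry 20, old pointer 4" on ILP32: 5x growth */
		printf("entry %zu, old pointer %zu\n",
		       sizeof(struct lsm_cache_entry), sizeof(void *));
		return 0;
	}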

> +#define LSM_CACHE_BUCKETS 16384
> +int lsm_cache_buckets = LSM_CACHE_BUCKETS;
> +static kmem_cache_t *lsm_htable_cache;
> +module_param(lsm_cache_buckets, int, 0);
> +MODULE_PARM_DESC(lsm_cache_buckets, "Number of lsm cache entries to allocate (default 16384)\n");
> +
> +static struct hlist_head *lsm_hash_table;
> +static spinlock_t lsm_spinlock = SPIN_LOCK_UNLOCKED;

This single global spinlock is roughly akin to the performance
problems SELinux has seen on large SMP machines, where avc access is
a serialization point.  It will hurt the single-module common case.
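
To make that concrete: every get and set from every CPU funnels
through lsm_spinlock.  Even the obvious mitigation, a lock per
bucket, only spreads the pain.  A rough sketch against the
declarations quoted above (security_get_value and lsm_bucket_lock are
hypothetical; a real implementation would share a smaller lock
array):

	static spinlock_t lsm_bucket_lock[LSM_CACHE_BUCKETS];

	void *security_get_value(void *ptr, int lsm_id)
	{
		int h = lsm_hash(ptr, lsm_id);
		struct hlist_node *tmp;
		struct lsm_cache_entry *e;
		void *data = NULL;

		/* contention is now per bucket, but every lookup still
		 * bounces a lock cache line between CPUs, which a
		 * direct ->security dereference never does */
		spin_lock(&lsm_bucket_lock[h]);
		hlist_for_each(tmp, &lsm_hash_table[h]) {
			e = hlist_entry(tmp, struct lsm_cache_entry, list);
			if (e->ptr == ptr && e->lsm_id == lsm_id) {
				data = e->data;
				break;
			}
		}
		spin_unlock(&lsm_bucket_lock[h]);
		return data;
	}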

> +static int lsm_hash_bits;
> +
> +static int security_cache_startup(void)
> +{
> +	int i;
> +
> +	lsm_hash_bits = long_log2(lsm_cache_buckets);
> +	if (lsm_cache_buckets != (1 << lsm_hash_bits)) {

This would have to be sized according to memory.
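
Something that scales with RAM instead of a fixed module parameter,
for instance.  A hedged sketch (the divisor and floor are purely
illustrative, untested):

	static int security_cache_startup(void)
	{
		unsigned long buckets;

		/* illustrative policy: one bucket per 64 pages of RAM,
		 * with a floor of 256 buckets */
		buckets = num_physpages / 64;
		if (buckets < 256)
			buckets = 256;

		lsm_hash_bits = long_log2(buckets);
		lsm_cache_buckets = 1 << lsm_hash_bits;

		/* table and kmem cache allocation continue as in the
		 * patch above */
		return 0;
	}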

> +/*
> + * Add security data to hash table.
> + * Returns:  0 on success, -ENOMEM on alloc failure.
> + * Called by LSMs, grabs the lsm_spinlock (for now).
> + *
> + * What about the case where an LSM wants to
> + *   if (task->security) release(task->security);
> + *   task->security = new;
> + * Maybe set_value should return a pointer...
> + */
> +int security_set_value(void *ptr, int lsm_id, void *data, int gfp_flags)
> +{
> +	int h = lsm_hash(ptr, lsm_id);
> +	struct hlist_node *tmp;
> +	struct lsm_cache_entry *e, *e2 = NULL;
> +	int ret = 0;
> +
> +	spin_lock(&lsm_spinlock);
> +	hlist_for_each(tmp, &lsm_hash_table[h]) {
> +		e = hlist_entry(tmp, struct lsm_cache_entry, list);
> +		if (e->ptr == ptr && e->lsm_id == lsm_id) {
> +			e->data = data;
> +			goto out_unlock;
> +		}
> +	}
> +
> +	spin_unlock(&lsm_spinlock);
> +	e2 = kmem_cache_alloc(lsm_htable_cache, gfp_flags);
> +	if (!e2) {
> +		return -ENOMEM;
> +	}
> +
> +	spin_lock(&lsm_spinlock);
> +	hlist_for_each(tmp, &lsm_hash_table[h]) {
> +		e = hlist_entry(tmp, struct lsm_cache_entry, list);
> +		if (e->ptr == ptr && e->lsm_id == lsm_id) {
> +			e->data = data;
> +			goto out_free;
> +		}
> +	}

Two lock round trips per set.
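
The usual way to keep it to one round trip is to allocate up front
and eat a wasted allocation on the (rare) lost race.  A sketch
against the declarations quoted above:

	int security_set_value(void *ptr, int lsm_id, void *data, int gfp_flags)
	{
		int h = lsm_hash(ptr, lsm_id);
		struct hlist_node *tmp;
		struct lsm_cache_entry *e, *new;

		new = kmem_cache_alloc(lsm_htable_cache, gfp_flags);
		if (!new)
			return -ENOMEM;

		spin_lock(&lsm_spinlock);
		hlist_for_each(tmp, &lsm_hash_table[h]) {
			e = hlist_entry(tmp, struct lsm_cache_entry, list);
			if (e->ptr == ptr && e->lsm_id == lsm_id) {
				/* entry already exists: update it and
				 * toss the spare */
				e->data = data;
				spin_unlock(&lsm_spinlock);
				kmem_cache_free(lsm_htable_cache, new);
				return 0;
			}
		}
		new->ptr = ptr;
		new->lsm_id = lsm_id;
		new->data = data;
		hlist_add_head(&new->list, &lsm_hash_table[h]);
		spin_unlock(&lsm_spinlock);
		return 0;
	}

That trades one possibly-wasted allocation on the update path for a
single lock acquisition, but it still doesn't get anywhere near the
cost of a direct pointer assignment.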

I'm pretty skeptical of this approach.  I suspect the performance
numbers will be enough to kill it.

thanks,
-chris
-- 
Linux Security Modules     http://lsm.immunix.org     http://lsm.bkbits.net
