gdb: change regcache list to be a map

One regcache object is created for each stopped thread and is stored in
the regcache::regcaches linked list.  Looking up the regcache for a
given thread is therefore O(number of threads), and stopping all threads
becomes O(number of threads ^ 2).  This becomes noticeable when
debugging thousands of threads, which is typical with GPU targets.  This
patch replaces the linked list with an std::unordered_multimap, indexed
by (target, ptid).
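A minimal standalone sketch of the new container follows.  GDB's actual
process_stratum_target, ptid_t and regcache types are replaced here with
simplified stand-ins; only the shape of the map matches the patch:

```cpp
#include <cassert>
#include <cstddef>
#include <functional>
#include <memory>
#include <unordered_map>

/* Simplified stand-ins for GDB's types, for illustration only.  */
struct process_stratum_target {};

struct ptid_t
{
  long pid, lwp;

  bool operator== (const ptid_t &other) const
  { return pid == other.pid && lwp == other.lwp; }
};

struct regcache
{
  explicit regcache (int arch) : arch (arch) {}
  int arch;
};

/* Key type: a (target, ptid) pair, as described above.  */
struct target_ptid
{
  process_stratum_target *target;
  ptid_t ptid;

  bool operator== (const target_ptid &other) const
  { return target == other.target && ptid == other.ptid; }
};

/* Hash functor, summing the members' std::hash values as the patch
   does.  */
struct hash_target_ptid
{
  std::size_t operator() (const target_ptid &key) const
  {
    return (std::hash<process_stratum_target *> () (key.target)
	    + std::hash<long> () (key.ptid.pid)
	    + std::hash<long> () (key.ptid.lwp));
  }
};

/* Several regcaches (one per arch) can exist for the same
   (target, ptid), hence a multimap.  */
using target_ptid_regcache_map
  = std::unordered_multimap<target_ptid, std::unique_ptr<regcache>,
			    hash_target_ptid>;
```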

I originally designed it using an std::unordered_map with (target, ptid,
arch) as the key, because that's how lookups are done (in
get_thread_arch_aspace_regcache).  However, the registers_changed_ptid
function, also somewhat on the hot path (it is used when resuming
threads), needs to delete all regcaches associated with a given (target,
ptid) tuple.  Using (target, ptid) as the key allows doing this more
efficiently (see the exception below).  If the key of the map were
(target, ptid, arch), we'd have to walk all items of the map.

The lookup (in get_thread_arch_aspace_regcache) walks over all existing
regcaches belonging to the given (target, ptid), looking for the one
with the right arch.  This is fine, as there will be very few regcaches
for a given key (typically one).  Compared to the linked list, lookups
become faster as the number of threads grows.  With a small number of
threads, a map lookup will probably be a bit slower than walking a few
linked list nodes, but I don't think it will be noticeable in practice.
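That per-key walk can be sketched with equal_range.  This is a
hypothetical simplification, with a plain integer key standing in for
(target, ptid) and only the arch stored in the value:

```cpp
#include <cassert>
#include <unordered_map>

/* Simplified model, for illustration only: an integer key stands in
   for (target, ptid).  */
struct regcache { int arch; };
using regcache_map = std::unordered_multimap<long, regcache>;

/* Return the regcache for KEY with arch ARCH, creating it if none
   exists yet.  Only the few entries sharing KEY are visited.  */
regcache &get_regcache (regcache_map &map, long key, int arch)
{
  auto range = map.equal_range (key);
  for (auto it = range.first; it != range.second; ++it)
    if (it->second.arch == arch)
      return it->second;

  /* Not found: create one, mirroring what
     get_thread_arch_aspace_regcache does.  */
  return map.emplace (key, regcache {arch})->second;
}
```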

The function registers_changed_ptid deletes all regcaches related to a
given (target, ptid).  We must now handle the different cases
separately:

- NULL target and minus_one_ptid: we delete all the entries
- NULL target and non-minus_one_ptid: invalid (checked by an assert)
- non-NULL target and non-minus_one_ptid: we delete all the entries
  associated with that tuple; this is done efficiently
- non-NULL target and minus_one_ptid: we delete all the entries
  associated with that target, whatever the ptid.  This is the slightly
  annoying case, as we can't easily look up all items having this target
  in their key.  I implemented it by walking the map, which is not
  ideal.
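The four cases could be sketched like this (simplified stand-in types
again; hypothetical, not the actual GDB code):

```cpp
#include <cassert>
#include <cstddef>
#include <functional>
#include <unordered_map>

/* Simplified stand-ins, for illustration only.  */
struct process_stratum_target {};
using ptid_key = long;			/* stands in for ptid_t */
constexpr ptid_key minus_one_ptid = -1;

struct target_ptid
{
  process_stratum_target *target;
  ptid_key ptid;

  bool operator== (const target_ptid &o) const
  { return target == o.target && ptid == o.ptid; }
};

struct hash_target_ptid
{
  std::size_t operator() (const target_ptid &k) const
  { return std::hash<void *> () (k.target) + std::hash<long> () (k.ptid); }
};

using regcache_map
  = std::unordered_multimap<target_ptid, int, hash_target_ptid>;

/* Sketch of the four cases described above.  */
void registers_changed_ptid (regcache_map &regcaches,
			     process_stratum_target *target, ptid_key ptid)
{
  if (target == nullptr && ptid == minus_one_ptid)
    regcaches.clear ();			/* delete everything */
  else if (target != nullptr && ptid != minus_one_ptid)
    regcaches.erase (target_ptid {target, ptid}); /* efficient per-key erase */
  else if (target != nullptr)
    {
      /* minus_one_ptid: no per-key lookup possible, walk the map.  */
      for (auto it = regcaches.begin (); it != regcaches.end ();)
	{
	  if (it->first.target == target)
	    it = regcaches.erase (it);
	  else
	    ++it;
	}
    }
  else
    assert (0 && "NULL target with a specific ptid is invalid");
}
```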

The function regcache_thread_ptid_changed is called when a thread
changes ptid.  It is implemented efficiently using the map, although
that's not very important: it is not called often, mostly when creating
an inferior, on some specific platforms.
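One way to implement that re-keying efficiently with C++17 node handles
looks like this (a sketch with a simplified integer key, not the
patch's actual code):

```cpp
#include <cassert>
#include <unordered_map>
#include <utility>
#include <vector>

/* Simplified model, for illustration only: an integer key stands in
   for (target, ptid), the value holds only the arch.  */
using regcache_map = std::unordered_multimap<long, int>;

/* Re-key all regcaches from OLD_KEY to NEW_KEY, as needed when a
   thread changes ptid.  Node handles let us move the entries to the
   new key without reallocating them.  */
void regcache_thread_ptid_changed (regcache_map &map, long old_key,
				   long new_key)
{
  std::vector<regcache_map::node_type> nodes;

  /* Extract all entries with OLD_KEY...  */
  for (auto it = map.find (old_key); it != map.end ();
       it = map.find (old_key))
    nodes.push_back (map.extract (it));

  /* ... and reinsert them under NEW_KEY.  */
  for (auto &node : nodes)
    {
      node.key () = new_key;
      map.insert (std::move (node));
    }
}
```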

Note: In hash_target_ptid, I am combining hash values from std::hash by
summing them.  I don't think it's ideal, since std::hash is just the
identity function for base types.  But I don't know what would be better
to reduce the chance of collisions.  If anybody has a better idea, I'd
be interested.
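For reference, one commonly used alternative to summing is the
boost-style hash_combine mixing step.  Unlike a sum, it is not
commutative, so combining (a, b) and (b, a) gives different results.  A
sketch (not part of this patch):

```cpp
#include <cassert>
#include <cstddef>
#include <functional>

/* Boost-style hash_combine: folds VALUE into SEED using shifts and a
   golden-ratio-derived constant, so the result depends on the order
   in which values are combined.  */
inline void hash_combine (std::size_t &seed, std::size_t value)
{
  seed ^= value + 0x9e3779b9 + (seed << 6) + (seed >> 2);
}

/* Combine two hash values into one.  */
inline std::size_t hash_pair (std::size_t a, std::size_t b)
{
  std::size_t seed = 0;
  hash_combine (seed, a);
  hash_combine (seed, b);
  return seed;
}
```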

This patch is a tiny bit of ROCm GDB [1] we would like to merge
upstream.  Laurent Morichetti gave me these performance numbers:

The benchmark used is:

  time ./gdb --data-directory=data-directory /extra/lmoriche/hip/samples/0_Intro/bit_extract/bit_extract -ex "set pagination off" -ex "set breakpoint pending on" -ex "b bit_extract_kernel if \$_thread == 5" -ex run -ex c -batch

It measures the time it takes to continue from a conditional breakpoint with
2048 threads at that breakpoint, one of them reporting the breakpoint.

baseline:
real    0m10.227s
real    0m10.177s
real    0m10.362s

with patch:
real    0m8.356s
real    0m8.424s
real    0m8.494s

[1] https://github.com/ROCm-Developer-Tools/ROCgdb

gdb/ChangeLog:

	* regcache.c (struct target_ptid): New struct.
	(hash_target_ptid): New struct.
	(target_ptid_regcache_map): New type.
	(regcaches): Change type to target_ptid_regcache_map.
	(get_thread_arch_aspace_regcache): Update to regcaches' new
	type.
	(regcache_thread_ptid_changed): Likewise.
	(registers_changed_ptid): Likewise.
	(regcaches_size): Likewise.
	(regcaches_test): Update.
	(regcache_thread_ptid_changed): Update.
	* gdbsupport/ptid.h (hash_ptid): New struct.

Change-Id: Iabb0a1111707936ca111ddb13f3b09efa83d3402