import Lion libdispatch-187.5 source drop
git-svn-id: https://svn.macosforge.org/repository/libdispatch/branches/Lion@202 5710d607-3af0-45f8-8f96-4508d4f60227
diff --git a/INSTALL b/INSTALL
index bed7388..69fd5a6 100644
--- a/INSTALL
+++ b/INSTALL
@@ -13,6 +13,7 @@
compile-time configuration options that should be reviewed before starting.
An uncustomized install requires:
+ sh autogen.sh
./configure
make
make install
@@ -24,6 +25,11 @@
Specify the path to Apple's Libc package, so that appropriate headers
can be found and used.
+--with-apple-libclosure-source
+
+ Specify the path to Apple's Libclosure package, so that appropriate headers
+ can be found and used.
+
--with-apple-xnu-source
Specify the path to Apple's XNU package, so that appropriate headers
@@ -36,61 +42,40 @@
Mac OS X, where the Blocks runtime is included in libSystem, but is
required on FreeBSD.
-Some sites will wish to build using a non-default C compiler; for example,
-this is desirable on FreeBSD so that libdispatch is built with clang and
-blocks support. A typically FreeBSD configuration will use:
-
- CC=clang ./configure --with-blocks-runtime=/usr/local/lib
- make
- make install
-
-The following options are likely only to be required if building libdispatch
-as part of Mac OS X's libSystem:
-
---enable-legacy-api
-
- Enable a legacy (deprecated) API used by some early GCD applications.
+The following options are likely to only be useful when building libdispatch
+on Mac OS X as a replacement for /usr/lib/system/libdispatch.dylib:
--disable-libdispatch-init-constructor
Do not tag libdispatch's init routine as __constructor, in which case
it must be run manually before libdispatch routines can be called.
- For the libdispatch code compiled directly into libSystem, the init
- routine is called automatically during process start.
-
---enable-apple-crashreporter-info
-
- Set global variables during a libdispatch crash to provide additional
- debugging information for CrashReporter.
+ For the libdispatch library in /usr/lib/system, the init routine is called
+ automatically during process start.
--enable-apple-tsd-optimizations
- Use a non-portable allocation scheme for pthread per-thread data
- (TSD) keys when built into libSystem on Mac OS X. This should not be
- used on other OS's, nor on Mac OS X when building as a stand-alone
- library.
-
---enable-apple-semaphore-optimizations
-
- libdispatch contains hand-optimized assembly for use with libdispatch
- semaphores.
+ Use a non-portable allocation scheme for pthread per-thread data (TSD)
+ keys when building libdispatch for /usr/lib/system on Mac OS X. This
+ should not be used on other OS's, or on Mac OS X when building a
+ stand-alone library.
Typical configuration commands
-The following command lines create the default config.h required to build
-libdispatch with libSystem in Mac OS X Snow Leopard:
+The following command lines create the configuration required to build
+libdispatch for /usr/lib/system on Mac OS X Lion:
sh autogen.sh
- ./configure \
- --with-apple-libc-source=/path/to/10.6.0/Libc-583 \
- --with-apple-xnu-source=/path/to/10.6.0/xnu-1456.1.26 \
- --enable-legacy-api \
- --disable-libdispatch-init-constructor \
- --enable-apple-crashreporter-info \
- --enable-apple-tsd-optimizations \
- --enable-apple-semaphore-optimizations
+ ./configure CFLAGS='-arch x86_64 -arch i386' \
+ --prefix=/usr --libdir=/usr/lib/system \
+ --disable-dependency-tracking --disable-static \
+ --disable-libdispatch-init-constructor \
+ --enable-apple-tsd-optimizations \
+ --with-apple-libc-source=/path/to/10.7.0/Libc-763.11 \
+ --with-apple-libclosure-source=/path/to/10.7.0/libclosure-53 \
+ --with-apple-xnu-source=/path/to/10.7.0/xnu-1699.22.73
-Typical configuration line for FreeBSD 8.x and 9.x:
+Typical configuration line for FreeBSD 8.x and 9.x to build libdispatch with
+clang and blocks support:
sh autogen.sh
- CC=clang ./configure --with-blocks-runtime=/usr/local/lib
+ ./configure CC=clang --with-blocks-runtime=/usr/local/lib
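
A quick way to verify an installed library is to compile and run a trivial
program against it. This is only an illustrative sketch: the file name and
link line are assumptions and vary by platform (on FreeBSD, clang with blocks
support and the BlocksRuntime library are needed, e.g.
"cc -fblocks check_dispatch.c -ldispatch -lBlocksRuntime").

    /* check_dispatch.c -- minimal sanity check for an installed libdispatch */
    #include <stdio.h>
    #include <dispatch/dispatch.h>

    int main(void)
    {
        __block int ran = 0;
        dispatch_queue_t q = dispatch_queue_create("org.example.check", NULL);

        /* Run one block synchronously on the newly created serial queue. */
        dispatch_sync(q, ^{ ran = 1; });
        dispatch_release(q);

        printf("libdispatch ok: %d\n", ran);
        return ran ? 0 : 1;
    }
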
diff --git a/Makefile.am b/Makefile.am
index 553f105..4e3167c 100644
--- a/Makefile.am
+++ b/Makefile.am
@@ -1,10 +1,22 @@
#
#
#
+
ACLOCAL_AMFLAGS = -I m4
-SUBDIRS= \
- dispatch \
- man \
- src \
- testing
+SUBDIRS= \
+ dispatch \
+ man \
+ private \
+ src
+
+EXTRA_DIST= \
+ LICENSE \
+ PATCHES \
+ autogen.sh \
+ config/config.h \
+ libdispatch.xcodeproj \
+ resolver \
+ tools \
+ xcodeconfig \
+ xcodescripts
diff --git a/PATCHES b/PATCHES
new file mode 100644
index 0000000..4f88387
--- /dev/null
+++ b/PATCHES
@@ -0,0 +1,194 @@
+The libdispatch project exists in a parallel open source repository at:
+ http://svn.macosforge.org/repository/libdispatch/trunk
+
+Externally committed revisions are periodically synchronized back to the
+internal repository (this repository).
+
+Key:
+ APPLIED: change set was applied to internal repository.
+ INTERNAL: change set originated internally (i.e. already applied).
+ SKIPPED: change set was skipped.
+
+[ 1] SKIPPED
+[ 2] SKIPPED
+[ 3] INTERNAL rdar://problem/7148356
+[ 4] APPLIED rdar://problem/7323245
+[ 5] APPLIED rdar://problem/7323245
+[ 6] APPLIED rdar://problem/7323245
+[ 7] APPLIED rdar://problem/7323245
+[ 8] APPLIED rdar://problem/7323245
+[ 9] APPLIED rdar://problem/7323245
+[ 10] APPLIED rdar://problem/7323245
+[ 11] APPLIED rdar://problem/7323245
+[ 12] APPLIED rdar://problem/7323245
+[ 13] SKIPPED
+[ 14] APPLIED rdar://problem/7323245
+[ 15] APPLIED rdar://problem/7323245
+[ 16] APPLIED rdar://problem/7323245
+[ 17] APPLIED rdar://problem/7323245
+[ 18] APPLIED rdar://problem/7323245
+[ 19] APPLIED rdar://problem/7323245
+[ 20] APPLIED rdar://problem/7323245
+[ 21] APPLIED rdar://problem/7323245
+[ 22] APPLIED rdar://problem/7323245
+[ 23] APPLIED rdar://problem/7323245
+[ 24] APPLIED rdar://problem/7323245
+[ 25] APPLIED rdar://problem/7323245
+[ 26] APPLIED rdar://problem/7323245
+[ 27] APPLIED rdar://problem/7323245
+[ 28] APPLIED rdar://problem/7323245
+[ 29] APPLIED rdar://problem/7323245
+[ 30] SKIPPED
+[ 31] APPLIED rdar://problem/7323245
+[ 32] APPLIED rdar://problem/7323245
+[ 33] APPLIED rdar://problem/7323245
+[ 34] APPLIED rdar://problem/7323245
+[ 35] SKIPPED
+[ 36] APPLIED rdar://problem/7323245
+[ 37] APPLIED rdar://problem/7323245
+[ 38] APPLIED rdar://problem/7323245
+[ 39] APPLIED rdar://problem/7323245
+[ 40] APPLIED rdar://problem/7323245
+[ 41] APPLIED rdar://problem/7323245
+[ 42] APPLIED rdar://problem/7323245
+[ 43] APPLIED rdar://problem/7323245
+[ 44] APPLIED rdar://problem/7323245
+[ 45] APPLIED rdar://problem/7323245
+[ 46] APPLIED rdar://problem/7323245
+[ 47] APPLIED rdar://problem/7323245
+[ 48] APPLIED rdar://problem/7323245
+[ 49] APPLIED rdar://problem/7323245
+[ 50] APPLIED rdar://problem/7323245
+[ 51] APPLIED rdar://problem/7323245
+[ 52] APPLIED rdar://problem/7323245
+[ 53] APPLIED rdar://problem/7323245
+[ 54] APPLIED rdar://problem/7323245
+[ 55] APPLIED rdar://problem/7323245
+[ 56] APPLIED rdar://problem/7323245
+[ 57] APPLIED rdar://problem/7323245
+[ 58] APPLIED rdar://problem/7323245
+[ 59] APPLIED rdar://problem/7323245
+[ 60] APPLIED rdar://problem/7323245
+[ 61] APPLIED rdar://problem/7323245
+[ 62] APPLIED rdar://problem/7323245
+[ 63] APPLIED rdar://problem/7323245
+[ 64] APPLIED rdar://problem/7323245
+[ 65] APPLIED rdar://problem/7323245
+[ 66] APPLIED rdar://problem/7323245
+[ 67] APPLIED rdar://problem/7323245
+[ 68] APPLIED rdar://problem/7323245
+[ 69] APPLIED rdar://problem/7323245
+[ 70] APPLIED rdar://problem/7323245
+[ 71] INTERNAL
+[ 72] INTERNAL
+[ 73] APPLIED rdar://problem/7531526
+[ 74] APPLIED rdar://problem/7531526
+[ 75]
+[ 76]
+[ 77]
+[ 78]
+[ 79] APPLIED rdar://problem/7531526
+[ 80] APPLIED rdar://problem/7531526
+[ 81] APPLIED rdar://problem/7531526
+[ 82] APPLIED rdar://problem/7531526
+[ 83] APPLIED rdar://problem/7531526
+[ 84] APPLIED rdar://problem/7531526
+[ 85]
+[ 86]
+[ 87] APPLIED rdar://problem/7531526
+[ 88] APPLIED rdar://problem/7531526
+[ 89] APPLIED rdar://problem/7531526
+[ 90]
+[ 91]
+[ 92]
+[ 93]
+[ 94]
+[ 95]
+[ 96] APPLIED rdar://problem/7531526
+[ 97] APPLIED rdar://problem/7531526
+[ 98]
+[ 99]
+[ 100]
+[ 101]
+[ 102]
+[ 103] APPLIED rdar://problem/7531526
+[ 104] APPLIED rdar://problem/7531526
+[ 105]
+[ 106] APPLIED rdar://problem/7531526
+[ 107] SKIPPED
+[ 108] SKIPPED
+[ 109] SKIPPED
+[ 110] SKIPPED
+[ 111] SKIPPED
+[ 112] APPLIED rdar://problem/7531526
+[ 113] SKIPPED
+[ 114] APPLIED rdar://problem/7531526
+[ 115] APPLIED rdar://problem/7531526
+[ 116] APPLIED rdar://problem/7531526
+[ 117] SKIPPED
+[ 118] APPLIED rdar://problem/7531526
+[ 119] SKIPPED
+[ 120] APPLIED rdar://problem/7531526
+[ 121] SKIPPED
+[ 122] SKIPPED
+[ 123] SKIPPED
+[ 124] SKIPPED
+[ 125] APPLIED rdar://problem/7531526
+[ 126] SKIPPED
+[ 127] APPLIED rdar://problem/7531526
+[ 128]
+[ 129]
+[ 130]
+[ 131]
+[ 132]
+[ 133]
+[ 134]
+[ 135]
+[ 136]
+[ 137] APPLIED rdar://problem/7647055
+[ 138] SKIPPED
+[ 139] APPLIED rdar://problem/7531526
+[ 140] APPLIED rdar://problem/7531526
+[ 141] APPLIED rdar://problem/7531526
+[ 142] APPLIED rdar://problem/7531526
+[ 143]
+[ 144] APPLIED rdar://problem/7531526
+[ 145] APPLIED rdar://problem/7531526
+[ 146] APPLIED rdar://problem/7531526
+[ 147]
+[ 148]
+[ 149]
+[ 150]
+[ 151] APPLIED rdar://problem/7531526
+[ 152] APPLIED rdar://problem/7531526
+[ 153]
+[ 154] APPLIED rdar://problem/7531526
+[ 155]
+[ 156]
+[ 157] APPLIED rdar://problem/7531526
+[ 158]
+[ 159]
+[ 160]
+[ 161]
+[ 162] APPLIED rdar://problem/7531526
+[ 163] APPLIED rdar://problem/7531526
+[ 164]
+[ 165]
+[ 166] APPLIED rdar://problem/7531526
+[ 167] APPLIED rdar://problem/7531526
+[ 168]
+[ 169] APPLIED rdar://problem/7531526
+[ 170] APPLIED rdar://problem/7531526
+[ 171] APPLIED rdar://problem/7531526
+[ 172] APPLIED rdar://problem/7531526
+[ 173] APPLIED rdar://problem/7531526
+[ 174] APPLIED rdar://problem/7531526
+[ 175] APPLIED rdar://problem/7531526
+[ 176] APPLIED rdar://problem/7531526
+[ 177] APPLIED rdar://problem/7531526
+[ 178]
+[ 179] APPLIED rdar://problem/7531526
+[ 180] APPLIED rdar://problem/7531526
+[ 181]
+[ 182]
+[ 183] INTERNAL rdar://problem/7581831
diff --git a/config/config.h b/config/config.h
index 3ffef45..040bf21 100644
--- a/config/config.h
+++ b/config/config.h
@@ -1,29 +1,14 @@
/* config/config.h. Generated from config.h.in by configure. */
/* config/config.h.in. Generated from configure.ac by autoheader. */
-/* Define to compile out legacy API */
-/* #undef DISPATCH_NO_LEGACY */
-
/* Define to 1 if you have the declaration of `CLOCK_MONOTONIC', and to 0 if
you don't. */
#define HAVE_DECL_CLOCK_MONOTONIC 0
-/* Define to 1 if you have the declaration of `CLOCK_REALTIME', and to 0 if
- you don't. */
-#define HAVE_DECL_CLOCK_REALTIME 0
-
/* Define to 1 if you have the declaration of `CLOCK_UPTIME', and to 0 if you
don't. */
#define HAVE_DECL_CLOCK_UPTIME 0
-/* Define to 1 if you have the declaration of `EVFILT_LIO', and to 0 if you
- don't. */
-#define HAVE_DECL_EVFILT_LIO 0
-
-/* Define to 1 if you have the declaration of `EVFILT_SESSION', and to 0 if
- you don't. */
-#define HAVE_DECL_EVFILT_SESSION 1
-
/* Define to 1 if you have the declaration of `FD_COPY', and to 0 if you
don't. */
#define HAVE_DECL_FD_COPY 1
@@ -36,10 +21,6 @@
don't. */
#define HAVE_DECL_NOTE_REAP 1
-/* Define to 1 if you have the declaration of `NOTE_REVOKE', and to 0 if you
- don't. */
-#define HAVE_DECL_NOTE_REVOKE 1
-
/* Define to 1 if you have the declaration of `NOTE_SIGNAL', and to 0 if you
don't. */
#define HAVE_DECL_NOTE_SIGNAL 1
@@ -100,9 +81,6 @@
/* Define if __builtin_trap marked noreturn */
#define HAVE_NORETURN_BUILTIN_TRAP 1
-/* Define if __private_extern__ present */
-#define HAVE_PRIVATE_EXTERN 1
-
/* Define to 1 if you have the `pthread_key_init_np' function. */
#define HAVE_PTHREAD_KEY_INIT_NP 1
@@ -139,9 +117,6 @@
/* Define to 1 if you have the <sys/stat.h> header file. */
#define HAVE_SYS_STAT_H 1
-/* Define to 1 if you have the <sys/sysctl.h> header file. */
-#define HAVE_SYS_SYSCTL_H 1
-
/* Define to 1 if you have the <sys/types.h> header file. */
#define HAVE_SYS_TYPES_H 1
@@ -165,23 +140,17 @@
#define PACKAGE_NAME "libdispatch"
/* Define to the full name and version of this package. */
-#define PACKAGE_STRING "libdispatch 1.0"
+#define PACKAGE_STRING "libdispatch 1.1"
/* Define to the one symbol short name of this package. */
#define PACKAGE_TARNAME "libdispatch"
/* Define to the version of this package. */
-#define PACKAGE_VERSION "1.0"
+#define PACKAGE_VERSION "1.1"
/* Define to 1 if you have the ANSI C header files. */
#define STDC_HEADERS 1
-/* Define to use Mac OS X crashreporter info */
-#define USE_APPLE_CRASHREPORTER_INFO 1
-
-/* Define to use non-portablesemaphore optimizations for Mac OS X */
-#define USE_APPLE_SEMAPHORE_OPTIMIZATIONS 1
-
/* Define to use non-portable pthread TSD optimizations for Mac OS X) */
#define USE_APPLE_TSD_OPTIMIZATIONS 1
@@ -195,7 +164,7 @@
/* #undef USE_POSIX_SEM */
/* Version number of package */
-#define VERSION "1.0"
+#define VERSION "1.1"
/* Define to 1 if on AIX 3.
System headers sometimes define this.
@@ -219,6 +188,9 @@
/* Define to 1 if you need to in order for `stat' and other things to work. */
/* #undef _POSIX_SOURCE */
+/* Define if using Darwin $NOCANCEL */
+#define __DARWIN_NON_CANCELABLE 1
+
/* Enable extensions on Solaris. */
#ifndef __EXTENSIONS__
# define __EXTENSIONS__ 1
@@ -229,6 +201,3 @@
#ifndef _TANDEM_SOURCE
# define _TANDEM_SOURCE 1
#endif
-
-/* Define to a replacement for __private_extern */
-/* #undef __private_extern__ */
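
For reference, the remaining HAVE_*/USE_* macros are consumed by the sources
through ordinary conditional compilation. A minimal sketch of that pattern is
shown below; the program is hypothetical and not part of libdispatch.

    /* config_example.c -- illustrative use of the generated feature macros */
    #include "config/config.h"

    #include <stdio.h>
    #include <time.h>

    static int have_monotonic_clock(void)
    {
    #if HAVE_DECL_CLOCK_MONOTONIC
        /* configure found the declaration, so it is safe to reference it */
        struct timespec ts;
        return clock_gettime(CLOCK_MONOTONIC, &ts) == 0;
    #else
        /* fall back to another time source, e.g. mach_absolute_time() */
        return 0;
    #endif
    }

    int main(void)
    {
        printf("monotonic clock available: %d\n", have_monotonic_clock());
        return 0;
    }
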
diff --git a/configure.ac b/configure.ac
index 1f02ed3..eeba91b 100644
--- a/configure.ac
+++ b/configure.ac
@@ -3,13 +3,11 @@
#
AC_PREREQ(2.59)
-AC_INIT([libdispatch], [1.0], [libdispatch@macosforge.org], [libdispatch])
+AC_INIT([libdispatch], [1.1], [libdispatch@macosforge.org], [libdispatch])
AC_REVISION([$$])
AC_CONFIG_AUX_DIR(config)
AC_CONFIG_HEADER([config/config.h])
AC_CONFIG_MACRO_DIR([m4])
-AC_PROG_CC([clang gcc cc])
-AC_USE_SYSTEM_EXTENSIONS
AM_MAINTAINER_MODE
#
@@ -26,15 +24,29 @@
)
AC_SUBST([APPLE_LIBC_SOURCE_PATH])
+AC_ARG_WITH([apple-libclosure-source],
+ [AS_HELP_STRING([--with-apple-libclosure-source],
+ [Specify path to Apple libclosure source])],
+ [apple_libclosure_source_path=${withval}
+ APPLE_LIBCLOSURE_SOURCE_PATH=-I$apple_libclosure_source_path
+ CPPFLAGS="$CPPFLAGS -I$apple_libclosure_source_path"],
+ [APPLE_LIBCLOSURE_SOURCE_PATH=]
+)
+AC_SUBST([APPLE_LIBCLOSURE_SOURCE_PATH])
+
AC_ARG_WITH([apple-xnu-source],
[AS_HELP_STRING([--with-apple-xnu-source],
[Specify path to Apple XNU source])],
[apple_xnu_source_path=${withval}/libkern
APPLE_XNU_SOURCE_PATH=-I$apple_xnu_source_path
- CPPFLAGS="$CPPFLAGS -I$apple_xnu_source_path"],
+ CPPFLAGS="$CPPFLAGS -I$apple_xnu_source_path"
+ apple_xnu_source_system_path=${withval}/osfmk
+ APPLE_XNU_SOURCE_SYSTEM_PATH=$apple_xnu_source_system_path],
[APPLE_XNU_SOURCE_PATH=]
)
AC_SUBST([APPLE_XNU_SOURCE_PATH])
+AC_SUBST([APPLE_XNU_SOURCE_SYSTEM_PATH])
+AM_CONDITIONAL(USE_XNU_SOURCE, [test -n "$apple_xnu_source_system_path"])
AC_CACHE_CHECK([for System.framework/PrivateHeaders], dispatch_cv_system_privateheaders,
[AS_IF([test -d /System/Library/Frameworks/System.framework/PrivateHeaders],
@@ -45,22 +57,8 @@
)
#
-# Try to build the legacy API only if specifically requested.
-#
-AC_ARG_ENABLE([legacy-api],
- [AS_HELP_STRING([--enable-legacy-api], [Enable legacy (deprecated) API.])]
-)
-
-AS_IF([test "x$enable_legacy_api" != "xyes"],
- [use_legacy_api=false
- AC_DEFINE(DISPATCH_NO_LEGACY, 1,[Define to compile out legacy API])],
- [use_legacy_api=true]
-)
-AM_CONDITIONAL(USE_LEGACY_API, $use_legacy_api)
-
-#
-# On Mac OS X Snow Leopard, libpispatch_init is automatically invoked during
-# libsyscall process setup. On other systems, it is tagged as a library
+# On Mac OS X, libdispatch_init is automatically invoked during libSystem
+# process initialization. On other systems, it is tagged as a library
# constructor to be run automatically by the runtime linker.
#
AC_ARG_ENABLE([libdispatch-init-constructor],
@@ -74,23 +72,7 @@
)
#
-# Whether or not to include/reference a crashreporter symbol.
-#
-AC_ARG_ENABLE([apple-crashreporter-info],
- [AS_HELP_STRING([--enable-apple-crashreporter-info],
- [Use Mac OS X crashreporter info])]
-)
-
-AS_IF([test "x$enable_apple_crashreporter_info" = "xyes"],
- [AC_DEFINE(USE_APPLE_CRASHREPORTER_INFO, 1,
- [Define to use Mac OS X crashreporter info])]
-)
-
-#
-# libdispatch has micro-optimized and deeply personal knowledge of Mac OS
-# implementation details. Only enable this if explicitly requested, as it
-# will lead to data corruption if applied on systems violating its
-# expectations.
+# On Mac OS X libdispatch can use the non-portable direct pthread TSD functions
#
AC_ARG_ENABLE([apple-tsd-optimizations],
[AS_HELP_STRING([--enable-apple-tsd-optimizations],
@@ -102,16 +84,8 @@
[Define to use non-portable pthread TSD optimizations for Mac OS X)])]
)
-AC_ARG_ENABLE([apple-semaphore-optimizations],
- [AS_HELP_STRING([--enable-apple-semaphore-optimizations],
- [Use non-portable semaphore optimizations for Mac OS X.])]
-)
-
-AS_IF([test "x$enable_apple_semaphore_optimizations" = "xyes"],
- [AC_DEFINE(USE_APPLE_SEMAPHORE_OPTIMIZATIONS, 1,
- [Define to use non-portablesemaphore optimizations for Mac OS X])]
-)
-
+AC_USE_SYSTEM_EXTENSIONS
+AC_PROG_CC
AC_PROG_CXX
AC_PROG_INSTALL
AC_PROG_LIBTOOL
@@ -150,7 +124,19 @@
# Checks for header files.
#
AC_HEADER_STDC
-AC_CHECK_HEADERS([TargetConditionals.h pthread_machdep.h pthread_np.h malloc/malloc.h libkern/OSCrossEndian.h libkern/OSAtomic.h sys/sysctl.h])
+AC_CHECK_HEADERS([TargetConditionals.h pthread_np.h malloc/malloc.h libkern/OSCrossEndian.h libkern/OSAtomic.h])
+
+# hack for pthread_machdep.h's #include <System/machine/cpu_capabilities.h>
+AS_IF([test -n "$apple_xnu_source_system_path"], [
+ saveCPPFLAGS="$CPPFLAGS"
+ CPPFLAGS="$CPPFLAGS -I."
+ ln -fsh "$apple_xnu_source_system_path" System
+])
+AC_CHECK_HEADERS([pthread_machdep.h])
+AS_IF([test -n "$apple_xnu_source_system_path"], [
+ rm -f System
+ CPPFLAGS="$saveCPPFLAGS"
+])
#
# Core Services is tested in one of the GCD regression tests, so test for its
@@ -166,8 +152,9 @@
# We use the availability of mach.h to decide whether to compile in all sorts
# of Machisms, including using Mach ports as event sources, etc.
#
-AC_CHECK_HEADER([mach/mach.h],
- [AC_DEFINE(HAVE_MACH, 1,Define if mach is present)
+AC_CHECK_HEADER([mach/mach.h], [
+ AC_DEFINE(HAVE_MACH, 1, [Define if mach is present])
+ AC_DEFINE(__DARWIN_NON_CANCELABLE, 1, [Define if using Darwin $NOCANCEL])
have_mach=true],
[have_mach=false]
)
@@ -178,29 +165,21 @@
# in support for pthread work queues.
#
AC_CHECK_HEADER([pthread_workqueue.h],
- [AC_DEFINE(HAVE_PTHREAD_WORKQUEUES, 1,Define if pthread work queues are present)]
+ [AC_DEFINE(HAVE_PTHREAD_WORKQUEUES, 1, [Define if pthread work queues are present])]
)
#
-# Check if libpthread_workqueue.so exists
-#
-AC_CHECK_LIB(pthread_workqueue, pthread_workqueue_init_np,
- have_lpwq=true, have_lpwq=false)
-AM_CONDITIONAL(USE_LIBPTHREAD_WORKQUEUE, $have_lpwq)
-
-#
# Find functions and declarations we care about.
#
-AC_CHECK_DECLS([CLOCK_UPTIME, CLOCK_MONOTONIC, CLOCK_REALTIME], [], [],
+AC_CHECK_DECLS([CLOCK_UPTIME, CLOCK_MONOTONIC], [], [],
[[#include <time.h>]])
-AC_CHECK_DECLS([EVFILT_LIO, EVFILT_SESSION, NOTE_NONE, NOTE_REAP, NOTE_REVOKE, NOTE_SIGNAL], [], [],
- [[#include <sys/types.h>
-#include <sys/event.h>]])
+AC_CHECK_DECLS([NOTE_NONE, NOTE_REAP, NOTE_SIGNAL], [], [],
+ [[#include <sys/event.h>]])
AC_CHECK_DECLS([FD_COPY], [], [], [[#include <sys/select.h>]])
AC_CHECK_DECLS([SIGEMT], [], [], [[#include <signal.h>]])
AC_CHECK_DECLS([VQ_UPDATE, VQ_VERYLOWDISK], [], [], [[#include <sys/mount.h>]])
AC_CHECK_DECLS([program_invocation_short_name], [], [], [[#include <errno.h>]])
-AC_CHECK_FUNCS([pthread_key_init_np pthread_main_np mach_absolute_time malloc_create_zone sysconf getprogname getexecname vasprintf asprintf arc4random fgetln])
+AC_CHECK_FUNCS([pthread_key_init_np pthread_main_np mach_absolute_time malloc_create_zone sysconf getprogname])
AC_CHECK_DECLS([POSIX_SPAWN_START_SUSPENDED],
[have_posix_spawn_start_suspended=true],
@@ -220,10 +199,10 @@
#
AC_MSG_CHECKING([what semaphore type to use]);
AS_IF([test "x$have_mach" = "xtrue"],
- [AC_DEFINE(USE_MACH_SEM, 1,[Define to use Mach semaphores])
+ [AC_DEFINE(USE_MACH_SEM, 1, [Define to use Mach semaphores])
AC_MSG_RESULT([Mach semaphores])],
[test "x$have_sem_init" = "xtrue"],
- [AC_DEFINE(USE_POSIX_SEM, 1,[Define to use POSIX semaphores])
+ [AC_DEFINE(USE_POSIX_SEM, 1, [Define to use POSIX semaphores])
AC_MSG_RESULT([POSIX semaphores])],
[AC_MSG_ERROR([no supported semaphore type])]
)
@@ -233,20 +212,55 @@
#include <sys/cdefs.h>
#endif])
-DISPATCH_C_PRIVATE_EXTERN
DISPATCH_C_BLOCKS
+AC_CACHE_CHECK([for -fvisibility=hidden], [dispatch_cv_cc_visibility_hidden], [
+ saveCFLAGS="$CFLAGS"
+ CFLAGS="$CFLAGS -fvisibility=hidden"
+ AC_LINK_IFELSE([AC_LANG_PROGRAM([
+ extern __attribute__ ((visibility ("default"))) int foo; int foo;], [foo = 0;])],
+ [dispatch_cv_cc_visibility_hidden="yes"], [dispatch_cv_cc_visibility_hidden="no"])
+ CFLAGS="$saveCFLAGS"
+])
+AS_IF([test "x$dispatch_cv_cc_visibility_hidden" != "xno"], [
+ VISIBILITY_FLAGS="-fvisibility=hidden"
+])
+AC_SUBST([VISIBILITY_FLAGS])
+
+AC_CACHE_CHECK([for -momit-leaf-frame-pointer], [dispatch_cv_cc_omit_leaf_fp], [
+ saveCFLAGS="$CFLAGS"
+ CFLAGS="$CFLAGS -momit-leaf-frame-pointer"
+ AC_LINK_IFELSE([AC_LANG_PROGRAM([
+ extern int foo(void); int foo(void) {return 1;}], [foo();])],
+ [dispatch_cv_cc_omit_leaf_fp="yes"], [dispatch_cv_cc_omit_leaf_fp="no"])
+ CFLAGS="$saveCFLAGS"
+])
+AS_IF([test "x$dispatch_cv_cc_omit_leaf_fp" != "xno"], [
+ OMIT_LEAF_FP_FLAGS="-momit-leaf-frame-pointer"
+])
+AC_SUBST([OMIT_LEAF_FP_FLAGS])
+
+AC_CACHE_CHECK([for darwin linker], [dispatch_cv_ld_darwin], [
+ saveLDFLAGS="$LDFLAGS"
+ LDFLAGS="$LDFLAGS -dynamiclib -compatibility_version 1.2.3 -current_version 4.5.6"
+ AC_LINK_IFELSE([AC_LANG_PROGRAM([
+ extern int foo; int foo;], [foo = 0;])],
+ [dispatch_cv_ld_darwin="yes"], [dispatch_cv_ld_darwin="no"])
+ LDFLAGS="$saveLDFLAGS"
+])
+AM_CONDITIONAL(HAVE_DARWIN_LD, [test "x$dispatch_cv_ld_darwin" != "xno"])
+
#
# Temporary: some versions of clang do not mark __builtin_trap() as
# __attribute__((__noreturn__)). Detect and add if required.
#
AC_COMPILE_IFELSE([
AC_LANG_PROGRAM([void __attribute__((__noreturn__)) temp(void) { __builtin_trap(); }], [])], [
- AC_DEFINE(HAVE_NORETURN_BUILTIN_TRAP, 1,[Define if __builtin_trap marked noreturn])
+ AC_DEFINE(HAVE_NORETURN_BUILTIN_TRAP, 1, [Define if __builtin_trap marked noreturn])
], [])
#
# Generate Makefiles.
#
-AC_CONFIG_FILES([Makefile dispatch/Makefile man/Makefile src/Makefile testing/Makefile])
+AC_CONFIG_FILES([Makefile dispatch/Makefile man/Makefile private/Makefile src/Makefile])
AC_OUTPUT
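
The compiler and linker probes added above reduce to small link tests. For
readability, the -fvisibility=hidden check effectively builds a program like
the following; the file name and command line are illustrative.

    /* visibility_probe.c -- expanded form of the -fvisibility=hidden check
     *
     *     cc -fvisibility=hidden visibility_probe.c -o visibility_probe
     */
    extern __attribute__((visibility("default"))) int foo;
    int foo;

    int main(void)
    {
        foo = 0;        /* reference the exported symbol, as in the probe */
        return foo;
    }
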
diff --git a/dispatch/Makefile.am b/dispatch/Makefile.am
index 994d37a..5cba713 100644
--- a/dispatch/Makefile.am
+++ b/dispatch/Makefile.am
@@ -4,10 +4,12 @@
dispatchdir=$(includedir)/dispatch
-dispatch_HEADERS= \
+dispatch_HEADERS= \
base.h \
+ data.h \
dispatch.h \
group.h \
+ io.h \
object.h \
once.h \
queue.h \
diff --git a/dispatch/base.h b/dispatch/base.h
index a302a81..029e3e0 100644
--- a/dispatch/base.h
+++ b/dispatch/base.h
@@ -1,20 +1,20 @@
/*
- * Copyright (c) 2008-2009 Apple Inc. All rights reserved.
+ * Copyright (c) 2008-2011 Apple Inc. All rights reserved.
*
* @APPLE_APACHE_LICENSE_HEADER_START@
- *
+ *
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
- *
+ *
* http://www.apache.org/licenses/LICENSE-2.0
- *
+ *
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
- *
+ *
* @APPLE_APACHE_LICENSE_HEADER_END@
*/
@@ -47,19 +47,24 @@
struct dispatch_source_s *_ds;
struct dispatch_source_attr_s *_dsa;
struct dispatch_semaphore_s *_dsema;
+ struct dispatch_data_s *_ddata;
+ struct dispatch_io_s *_dchannel;
+ struct dispatch_operation_s *_doperation;
+ struct dispatch_disk_s *_ddisk;
} dispatch_object_t __attribute__((transparent_union));
#endif
typedef void (*dispatch_function_t)(void *);
#ifdef __cplusplus
-#define DISPATCH_DECL(name) typedef struct name##_s : public dispatch_object_s {} *name##_t
+#define DISPATCH_DECL(name) \
+ typedef struct name##_s : public dispatch_object_s {} *name##_t
#else
/*! @parseOnly */
#define DISPATCH_DECL(name) typedef struct name##_s *name##_t
#endif
-#ifdef __GNUC__
+#if __GNUC__
#define DISPATCH_NORETURN __attribute__((__noreturn__))
#define DISPATCH_NOTHROW __attribute__((__nothrow__))
#define DISPATCH_NONNULL1 __attribute__((__nonnull__(1)))
@@ -69,7 +74,7 @@
#define DISPATCH_NONNULL5 __attribute__((__nonnull__(5)))
#define DISPATCH_NONNULL6 __attribute__((__nonnull__(6)))
#define DISPATCH_NONNULL7 __attribute__((__nonnull__(7)))
-#if __clang__
+#if __clang__ && __clang_major__ < 3
// rdar://problem/6857843
#define DISPATCH_NONNULL_ALL
#else
@@ -77,9 +82,10 @@
#endif
#define DISPATCH_SENTINEL __attribute__((__sentinel__))
#define DISPATCH_PURE __attribute__((__pure__))
+#define DISPATCH_CONST __attribute__((__const__))
#define DISPATCH_WARN_RESULT __attribute__((__warn_unused_result__))
#define DISPATCH_MALLOC __attribute__((__malloc__))
-#define DISPATCH_FORMAT(...) __attribute__((__format__(__VA_ARGS__)))
+#define DISPATCH_ALWAYS_INLINE __attribute__((__always_inline__))
#else
/*! @parseOnly */
#define DISPATCH_NORETURN
@@ -106,11 +112,13 @@
/*! @parseOnly */
#define DISPATCH_PURE
/*! @parseOnly */
+#define DISPATCH_CONST
+/*! @parseOnly */
#define DISPATCH_WARN_RESULT
/*! @parseOnly */
#define DISPATCH_MALLOC
/*! @parseOnly */
-#define DISPATCH_FORMAT(...)
+#define DISPATCH_ALWAYS_INLINE
#endif
#if __GNUC__
@@ -119,4 +127,16 @@
#define DISPATCH_EXPORT extern
#endif
+#if __GNUC__
+#define DISPATCH_INLINE static __inline__
+#else
+#define DISPATCH_INLINE static inline
+#endif
+
+#if __GNUC__
+#define DISPATCH_EXPECT(x, v) __builtin_expect((x), (v))
+#else
+#define DISPATCH_EXPECT(x, v) (x)
+#endif
+
#endif
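
A hedged sketch of how the portability macros above are intended to be used;
the helper function below is hypothetical and not part of the headers.

    #include <dispatch/dispatch.h>      /* pulls in <dispatch/base.h> */

    /* DISPATCH_INLINE expands to "static __inline__" under GCC-compatible
     * compilers and "static inline" elsewhere; DISPATCH_EXPECT wraps
     * __builtin_expect when available and degrades to a plain expression. */
    DISPATCH_INLINE DISPATCH_ALWAYS_INLINE
    int
    example_is_main_queue(dispatch_queue_t q)
    {
        return DISPATCH_EXPECT(q == dispatch_get_main_queue(), 0);
    }

    int main(void)
    {
        return example_is_main_queue(dispatch_get_main_queue()) ? 0 : 1;
    }
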
diff --git a/dispatch/data.h b/dispatch/data.h
new file mode 100644
index 0000000..2222e1b
--- /dev/null
+++ b/dispatch/data.h
@@ -0,0 +1,248 @@
+/*
+ * Copyright (c) 2009-2011 Apple Inc. All rights reserved.
+ *
+ * @APPLE_APACHE_LICENSE_HEADER_START@
+ *
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ *
+ * @APPLE_APACHE_LICENSE_HEADER_END@
+ */
+
+#ifndef __DISPATCH_DATA__
+#define __DISPATCH_DATA__
+
+#ifndef __DISPATCH_INDIRECT__
+#error "Please #include <dispatch/dispatch.h> instead of this file directly."
+#include <dispatch/base.h> // for HeaderDoc
+#endif
+
+__BEGIN_DECLS
+
+/*! @header
+ * Dispatch data objects describe contiguous or sparse regions of memory that
+ * may be managed by the system or by the application.
+ * Dispatch data objects are immutable; any direct access to memory regions
+ * represented by dispatch data objects must not modify that memory.
+ */
+
+/*!
+ * @typedef dispatch_data_t
+ * A dispatch object representing memory regions.
+ */
+DISPATCH_DECL(dispatch_data);
+
+/*!
+ * @var dispatch_data_empty
+ * @discussion The singleton dispatch data object representing a zero-length
+ * memory region.
+ */
+#define dispatch_data_empty (&_dispatch_data_empty)
+__OSX_AVAILABLE_STARTING(__MAC_10_7,__IPHONE_5_0)
+DISPATCH_EXPORT struct dispatch_data_s _dispatch_data_empty;
+
+#ifdef __BLOCKS__
+
+/*!
+ * @const DISPATCH_DATA_DESTRUCTOR_DEFAULT
+ * @discussion The default destructor for dispatch data objects.
+ * Used at data object creation to indicate that the supplied buffer should
+ * be copied into internal storage managed by the system.
+ */
+#define DISPATCH_DATA_DESTRUCTOR_DEFAULT NULL
+
+/*!
+ * @const DISPATCH_DATA_DESTRUCTOR_FREE
+ * @discussion The destructor for dispatch data objects created from a malloc'd
+ * buffer. Used at data object creation to indicate that the supplied buffer
+ * was allocated by the malloc() family and should be destroyed with free(3).
+ */
+#define DISPATCH_DATA_DESTRUCTOR_FREE (_dispatch_data_destructor_free)
+__OSX_AVAILABLE_STARTING(__MAC_10_7,__IPHONE_5_0)
+DISPATCH_EXPORT const dispatch_block_t _dispatch_data_destructor_free;
+
+/*!
+ * @function dispatch_data_create
+ * Creates a dispatch data object from the given contiguous buffer of memory. If
+ * a non-default destructor is provided, ownership of the buffer remains with
+ * the caller (i.e. the bytes will not be copied). The last release of the data
+ * object will result in the invocation of the specified destructor on the
+ * specified queue to free the buffer.
+ *
+ * If the DISPATCH_DATA_DESTRUCTOR_FREE destructor is provided the buffer will
+ * be freed via free(3) and the queue argument ignored.
+ *
+ * If the DISPATCH_DATA_DESTRUCTOR_DEFAULT destructor is provided, data object
+ * creation will copy the buffer into internal memory managed by the system.
+ *
+ * @param buffer A contiguous buffer of data.
+ * @param size The size of the contiguous buffer of data.
+ * @param queue The queue to which the destructor should be submitted.
+ * @param destructor The destructor responsible for freeing the data when it
+ * is no longer needed.
+ * @result A newly created dispatch data object.
+ */
+__OSX_AVAILABLE_STARTING(__MAC_10_7,__IPHONE_5_0)
+DISPATCH_EXPORT DISPATCH_MALLOC DISPATCH_WARN_RESULT DISPATCH_NOTHROW
+dispatch_data_t
+dispatch_data_create(const void *buffer,
+ size_t size,
+ dispatch_queue_t queue,
+ dispatch_block_t destructor);
+
+/*!
+ * @function dispatch_data_get_size
+ * Returns the logical size of the memory region(s) represented by the specified
+ * dispatch data object.
+ *
+ * @param data The dispatch data object to query.
+ * @result The number of bytes represented by the data object.
+ */
+__OSX_AVAILABLE_STARTING(__MAC_10_7,__IPHONE_5_0)
+DISPATCH_EXPORT DISPATCH_PURE DISPATCH_NONNULL1 DISPATCH_NOTHROW
+size_t
+dispatch_data_get_size(dispatch_data_t data);
+
+/*!
+ * @function dispatch_data_create_map
+ * Maps the memory represented by the specified dispatch data object as a single
+ * contiguous memory region and returns a new data object representing it.
+ * If non-NULL references to a pointer and a size variable are provided, they
+ * are filled with the location and extent of that region. These allow direct
+ * read access to the represented memory, but are only valid until the copy
+ * object is released.
+ *
+ * @param data The dispatch data object to map.
+ * @param buffer_ptr A pointer to a pointer variable to be filled with the
+ * location of the mapped contiguous memory region, or
+ * NULL.
+ * @param size_ptr A pointer to a size_t variable to be filled with the
+ * size of the mapped contiguous memory region, or NULL.
+ * @result A newly created dispatch data object.
+ */
+__OSX_AVAILABLE_STARTING(__MAC_10_7,__IPHONE_5_0)
+DISPATCH_EXPORT DISPATCH_NONNULL1 DISPATCH_WARN_RESULT DISPATCH_NOTHROW
+dispatch_data_t
+dispatch_data_create_map(dispatch_data_t data,
+ const void **buffer_ptr,
+ size_t *size_ptr);
+
+/*!
+ * @function dispatch_data_create_concat
+ * Returns a new dispatch data object representing the concatenation of the
+ * specified data objects. Those objects may be released by the application
+ * after the call returns (however, the system might not deallocate the memory
+ * region(s) described by them until the newly created object has also been
+ * released).
+ *
+ * @param data1 The data object representing the region(s) of memory to place
+ * at the beginning of the newly created object.
+ * @param data2 The data object representing the region(s) of memory to place
+ * at the end of the newly created object.
+ * @result A newly created object representing the concatenation of the
+ * data1 and data2 objects.
+ */
+__OSX_AVAILABLE_STARTING(__MAC_10_7,__IPHONE_5_0)
+DISPATCH_EXPORT DISPATCH_NONNULL_ALL DISPATCH_WARN_RESULT DISPATCH_NOTHROW
+dispatch_data_t
+dispatch_data_create_concat(dispatch_data_t data1, dispatch_data_t data2);
+
+/*!
+ * @function dispatch_data_create_subrange
+ * Returns a new dispatch data object representing a subrange of the specified
+ * data object, which may be released by the application after the call returns
+ * (however, the system might not deallocate the memory region(s) described by
+ * that object until the newly created object has also been released).
+ *
+ * @param data The data object representing the region(s) of memory to
+ * create a subrange of.
+ * @param offset The offset into the data object where the subrange
+ * starts.
+ * @param length The length of the range.
+ * @result A newly created object representing the specified
+ * subrange of the data object.
+ */
+__OSX_AVAILABLE_STARTING(__MAC_10_7,__IPHONE_5_0)
+DISPATCH_EXPORT DISPATCH_NONNULL1 DISPATCH_WARN_RESULT DISPATCH_NOTHROW
+dispatch_data_t
+dispatch_data_create_subrange(dispatch_data_t data,
+ size_t offset,
+ size_t length);
+
+/*!
+ * @typedef dispatch_data_applier_t
+ * A block to be invoked for every contiguous memory region in a data object.
+ *
+ * @param region A data object representing the current region.
+ * @param offset The logical offset of the current region to the start
+ * of the data object.
+ * @param buffer The location of the memory for the current region.
+ * @param size The size of the memory for the current region.
+ * @result A Boolean indicating whether traversal should continue.
+ */
+typedef bool (^dispatch_data_applier_t)(dispatch_data_t region,
+ size_t offset,
+ const void *buffer,
+ size_t size);
+
+/*!
+ * @function dispatch_data_apply
+ * Traverse the memory regions represented by the specified dispatch data object
+ * in logical order and invoke the specified block once for every contiguous
+ * memory region encountered.
+ *
+ * Each invocation of the block is passed a data object representing the current
+ * region and its logical offset, along with the memory location and extent of
+ * the region. These allow direct read access to the memory region, but are only
+ * valid until the passed-in region object is released. Note that the region
+ * object is released by the system when the block returns; it is the
+ * responsibility of the application to retain it if the region object or the
+ * associated memory location is needed after the block returns.
+ *
+ * @param data The data object to traverse.
+ * @param applier The block to be invoked for every contiguous memory
+ * region in the data object.
+ * @result A Boolean indicating whether traversal completed
+ * successfully.
+ */
+__OSX_AVAILABLE_STARTING(__MAC_10_7,__IPHONE_5_0)
+DISPATCH_EXPORT DISPATCH_NONNULL_ALL DISPATCH_NOTHROW
+bool
+dispatch_data_apply(dispatch_data_t data, dispatch_data_applier_t applier);
+
+/*!
+ * @function dispatch_data_copy_region
+ * Finds the contiguous memory region containing the specified location among
+ * the regions represented by the specified object and returns a copy of the
+ * internal dispatch data object representing that region along with its logical
+ * offset in the specified object.
+ *
+ * @param data The dispatch data object to query.
+ * @param location The logical position in the data object to query.
+ * @param offset_ptr A pointer to a size_t variable to be filled with the
+ * logical offset of the returned region object to the
+ * start of the queried data object.
+ * @result A newly created dispatch data object.
+ */
+__OSX_AVAILABLE_STARTING(__MAC_10_7,__IPHONE_5_0)
+DISPATCH_EXPORT DISPATCH_NONNULL1 DISPATCH_NONNULL3 DISPATCH_WARN_RESULT
+DISPATCH_NOTHROW
+dispatch_data_t
+dispatch_data_copy_region(dispatch_data_t data,
+ size_t location,
+ size_t *offset_ptr);
+
+#endif /* __BLOCKS__ */
+
+__END_DECLS
+
+#endif /* __DISPATCH_DATA__ */
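
A hedged usage sketch for the dispatch data API declared above. The program is
illustrative only (not part of the source drop) and assumes a compiler with
blocks support.

    #include <stdio.h>
    #include <string.h>
    #include <dispatch/dispatch.h>

    int main(void)
    {
        static const char bytes[] = "hello, dispatch data";
        dispatch_queue_t q =
            dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);

        /* DISPATCH_DATA_DESTRUCTOR_DEFAULT copies the buffer into storage
         * managed by the system, so `bytes' need not outlive the object. */
        dispatch_data_t data = dispatch_data_create(bytes, strlen(bytes), q,
                DISPATCH_DATA_DESTRUCTOR_DEFAULT);

        printf("size: %zu\n", dispatch_data_get_size(data));

        /* Walk every contiguous region; a freshly created object has one. */
        dispatch_data_apply(data, ^bool(dispatch_data_t region, size_t offset,
                const void *buffer, size_t size) {
            (void)region; (void)buffer;
            printf("region at offset %zu, %zu bytes\n", offset, size);
            return true;    /* keep traversing */
        });

        dispatch_release(data);
        return 0;
    }
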
diff --git a/dispatch/dispatch.h b/dispatch/dispatch.h
index b9cee61..2ba2cce 100644
--- a/dispatch/dispatch.h
+++ b/dispatch/dispatch.h
@@ -1,20 +1,20 @@
/*
- * Copyright (c) 2008-2009 Apple Inc. All rights reserved.
+ * Copyright (c) 2008-2011 Apple Inc. All rights reserved.
*
* @APPLE_APACHE_LICENSE_HEADER_START@
- *
+ *
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
- *
+ *
* http://www.apache.org/licenses/LICENSE-2.0
- *
+ *
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
- *
+ *
* @APPLE_APACHE_LICENSE_HEADER_END@
*/
@@ -25,30 +25,18 @@
#include <Availability.h>
#include <TargetConditionals.h>
#endif
-#if HAVE_SYS_CDEFS_H
#include <sys/cdefs.h>
-#endif
#include <stddef.h>
#include <stdint.h>
#include <stdbool.h>
#include <stdarg.h>
-#if HAVE_UNISTD_H
#include <unistd.h>
-#endif
-
-#if defined(__cplusplus)
-#define __DISPATCH_BEGIN_DECLS extern "C" {
-#define __DISPATCH_END_DECLS }
-#else
-#define __DISPATCH_BEGIN_DECLS
-#define __DISPATCH_END_DECLS
-#endif
#ifndef __OSX_AVAILABLE_STARTING
-#define __OSX_AVAILABLE_STARTING(x, y)
+#define __OSX_AVAILABLE_STARTING(x, y)
#endif
-#define DISPATCH_API_VERSION 20090501
+#define DISPATCH_API_VERSION 20110201
#ifndef __DISPATCH_BUILDING_DISPATCH__
@@ -64,6 +52,8 @@
#include <dispatch/group.h>
#include <dispatch/semaphore.h>
#include <dispatch/once.h>
+#include <dispatch/data.h>
+#include <dispatch/io.h>
#undef __DISPATCH_INDIRECT__
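
The new data.h and io.h headers above refuse direct inclusion, so client code
always goes through the umbrella header. A trivial, illustrative sketch:

    #include <dispatch/dispatch.h>  /* not <dispatch/data.h> or <dispatch/io.h> */

    int main(void)
    {
        /* DISPATCH_API_VERSION identifies the API revision of this drop. */
        return DISPATCH_API_VERSION >= 20110201 ? 0 : 1;
    }
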
diff --git a/dispatch/group.h b/dispatch/group.h
index ce03a5a..4e6e11d 100644
--- a/dispatch/group.h
+++ b/dispatch/group.h
@@ -1,20 +1,20 @@
/*
- * Copyright (c) 2008-2009 Apple Inc. All rights reserved.
+ * Copyright (c) 2008-2011 Apple Inc. All rights reserved.
*
* @APPLE_APACHE_LICENSE_HEADER_START@
- *
+ *
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
- *
+ *
* http://www.apache.org/licenses/LICENSE-2.0
- *
+ *
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
- *
+ *
* @APPLE_APACHE_LICENSE_HEADER_END@
*/
@@ -33,14 +33,14 @@
*/
DISPATCH_DECL(dispatch_group);
-__DISPATCH_BEGIN_DECLS
+__BEGIN_DECLS
/*!
* @function dispatch_group_create
*
* @abstract
* Creates new group with which blocks may be associated.
- *
+ *
* @discussion
* This function creates a new group with which blocks may be associated.
* The dispatch group may be used to wait for the completion of the blocks it
@@ -50,7 +50,7 @@
* The newly created group, or NULL on failure.
*/
__OSX_AVAILABLE_STARTING(__MAC_10_6,__IPHONE_4_0)
-DISPATCH_EXPORT DISPATCH_WARN_RESULT
+DISPATCH_EXPORT DISPATCH_MALLOC DISPATCH_WARN_RESULT DISPATCH_NOTHROW
dispatch_group_t
dispatch_group_create(void);
@@ -79,7 +79,7 @@
*/
#ifdef __BLOCKS__
__OSX_AVAILABLE_STARTING(__MAC_10_6,__IPHONE_4_0)
-DISPATCH_EXPORT DISPATCH_NONNULL_ALL
+DISPATCH_EXPORT DISPATCH_NONNULL_ALL DISPATCH_NOTHROW
void
dispatch_group_async(dispatch_group_t group,
dispatch_queue_t queue,
@@ -113,7 +113,7 @@
* dispatch_group_async_f().
*/
__OSX_AVAILABLE_STARTING(__MAC_10_6,__IPHONE_4_0)
-DISPATCH_EXPORT DISPATCH_NONNULL1 DISPATCH_NONNULL2 DISPATCH_NONNULL4
+DISPATCH_EXPORT DISPATCH_NONNULL1 DISPATCH_NONNULL2 DISPATCH_NONNULL4 DISPATCH_NOTHROW
void
dispatch_group_async_f(dispatch_group_t group,
dispatch_queue_t queue,
@@ -124,11 +124,11 @@
* @function dispatch_group_wait
*
* @abstract
- * Wait synchronously for the previously submitted blocks to complete;
- * returns if the blocks have not completed within the specified timeout.
+ * Wait synchronously until all the blocks associated with a group have
+ * completed or until the specified timeout has elapsed.
*
* @discussion
- * This function waits for the completion of the blocks associated with the
+ * This function waits for the completion of the blocks associated with the
* given dispatch group, and returns after all blocks have completed or when
* the specified timeout has elapsed. When a timeout occurs, the group is
* restored to its original state.
@@ -156,7 +156,7 @@
* within the specified timeout) or non-zero on error (i.e. timed out).
*/
__OSX_AVAILABLE_STARTING(__MAC_10_6,__IPHONE_4_0)
-DISPATCH_EXPORT DISPATCH_NONNULL_ALL
+DISPATCH_EXPORT DISPATCH_NONNULL_ALL DISPATCH_NOTHROW
long
dispatch_group_wait(dispatch_group_t group, dispatch_time_t timeout);
@@ -164,8 +164,8 @@
* @function dispatch_group_notify
*
* @abstract
- * Schedule a block to be submitted to a queue when a group of previously
- * submitted blocks have completed.
+ * Schedule a block to be submitted to a queue when all the blocks associated
+ * with a group have completed.
*
* @discussion
* This function schedules a notification block to be submitted to the specified
@@ -173,7 +173,7 @@
*
* If no blocks are associated with the dispatch group (i.e. the group is empty)
* then the notification block will be submitted immediately.
- *
+ *
* The group will be empty at the time the notification block is submitted to
* the target queue. The group may either be released with dispatch_release()
* or reused for additional operations.
@@ -192,7 +192,7 @@
*/
#ifdef __BLOCKS__
__OSX_AVAILABLE_STARTING(__MAC_10_6,__IPHONE_4_0)
-DISPATCH_EXPORT DISPATCH_NONNULL_ALL
+DISPATCH_EXPORT DISPATCH_NONNULL_ALL DISPATCH_NOTHROW
void
dispatch_group_notify(dispatch_group_t group,
dispatch_queue_t queue,
@@ -203,8 +203,8 @@
* @function dispatch_group_notify_f
*
* @abstract
- * Schedule a function to be submitted to a queue when a group of previously
- * submitted functions have completed.
+ * Schedule a function to be submitted to a queue when all the blocks
+ * associated with a group have completed.
*
* @discussion
* See dispatch_group_notify() for details.
@@ -223,6 +223,7 @@
*/
__OSX_AVAILABLE_STARTING(__MAC_10_6,__IPHONE_4_0)
DISPATCH_EXPORT DISPATCH_NONNULL1 DISPATCH_NONNULL2 DISPATCH_NONNULL4
+DISPATCH_NOTHROW
void
dispatch_group_notify_f(dispatch_group_t group,
dispatch_queue_t queue,
@@ -245,7 +246,7 @@
* The result of passing NULL in this parameter is undefined.
*/
__OSX_AVAILABLE_STARTING(__MAC_10_6,__IPHONE_4_0)
-DISPATCH_EXPORT DISPATCH_NOTHROW DISPATCH_NONNULL_ALL
+DISPATCH_EXPORT DISPATCH_NONNULL_ALL DISPATCH_NOTHROW
void
dispatch_group_enter(dispatch_group_t group);
@@ -264,10 +265,10 @@
* The result of passing NULL in this parameter is undefined.
*/
__OSX_AVAILABLE_STARTING(__MAC_10_6,__IPHONE_4_0)
-DISPATCH_EXPORT DISPATCH_NOTHROW DISPATCH_NONNULL_ALL
+DISPATCH_EXPORT DISPATCH_NONNULL_ALL DISPATCH_NOTHROW
void
dispatch_group_leave(dispatch_group_t group);
-__DISPATCH_END_DECLS
+__END_DECLS
#endif
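
A hedged usage sketch for the dispatch group API documented above. The program
is illustrative only (not part of the source drop) and assumes a compiler with
blocks support.

    #include <stdio.h>
    #include <dispatch/dispatch.h>

    int main(void)
    {
        dispatch_group_t group = dispatch_group_create();
        dispatch_queue_t q =
            dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);

        for (int i = 0; i < 4; i++) {
            /* Each block is associated with the group until it completes. */
            dispatch_group_async(group, q, ^{
                printf("block %d done\n", i);
            });
        }

        /* Wait (without timeout) until every block in the group has run. */
        long timed_out = dispatch_group_wait(group, DISPATCH_TIME_FOREVER);
        dispatch_release(group);
        return timed_out ? 1 : 0;
    }
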
diff --git a/dispatch/io.h b/dispatch/io.h
new file mode 100644
index 0000000..f8fb2ff
--- /dev/null
+++ b/dispatch/io.h
@@ -0,0 +1,586 @@
+/*
+ * Copyright (c) 2009-2010 Apple Inc. All rights reserved.
+ *
+ * @APPLE_APACHE_LICENSE_HEADER_START@
+ *
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ *
+ * @APPLE_APACHE_LICENSE_HEADER_END@
+ */
+
+#ifndef __DISPATCH_IO__
+#define __DISPATCH_IO__
+
+#ifndef __DISPATCH_INDIRECT__
+#error "Please #include <dispatch/dispatch.h> instead of this file directly."
+#include <dispatch/base.h> // for HeaderDoc
+#endif
+
+__BEGIN_DECLS
+
+/*! @header
+ * Dispatch I/O provides both stream and random access asynchronous read and
+ * write operations on file descriptors. One or more dispatch I/O channels may
+ * be created from a file descriptor as either the DISPATCH_IO_STREAM type or
+ * DISPATCH_IO_RANDOM type. Once a channel has been created the application may
+ * schedule asynchronous read and write operations.
+ *
+ * The application may set policies on the dispatch I/O channel to indicate the
+ * desired frequency of I/O handlers for long-running operations.
+ *
+ * Dispatch I/O also provides a memory management model for I/O buffers that
+ * avoids unnecessary copying of data when pipelined between channels. Dispatch
+ * I/O monitors the overall memory pressure and I/O access patterns for the
+ * application to optimize resource utilization.
+ */
+
+/*!
+ * @typedef dispatch_fd_t
+ * Native file descriptor type for the platform.
+ */
+typedef int dispatch_fd_t;
+
+#ifdef __BLOCKS__
+
+/*!
+ * @functiongroup Dispatch I/O Convenience API
+ * Convenience wrappers around the dispatch I/O channel API, with simpler
+ * callback handler semantics and no explicit management of channel objects.
+ * File descriptors passed to the convenience API are treated as streams, and
+ * scheduling multiple operations on one file descriptor via the convenience API
+ * may incur more overhead than by using the dispatch I/O channel API directly.
+ */
+
+/*!
+ * @function dispatch_read
+ * Schedule a read operation for asynchronous execution on the specified file
+ * descriptor. The specified handler is enqueued with the data read from the
+ * file descriptor when the operation has completed or an error occurs.
+ *
+ * The data object passed to the handler will be automatically released by the
+ * system when the handler returns. It is the responsibility of the application
+ * to retain, concatenate or copy the data object if it is needed after the
+ * handler returns.
+ *
+ * The data object passed to the handler will only contain as much data as is
+ * currently available from the file descriptor (up to the specified length).
+ *
+ * If an unrecoverable error occurs on the file descriptor, the handler will be
+ * enqueued with the appropriate error code along with a data object of any data
+ * that could be read successfully.
+ *
+ * An invocation of the handler with an error code of zero and an empty data
+ * object indicates that EOF was reached.
+ *
+ * The system takes control of the file descriptor until the handler is
+ * enqueued, and during this time file descriptor flags such as O_NONBLOCK will
+ * be modified by the system on behalf of the application. It is an error for
+ * the application to modify a file descriptor directly while it is under the
+ * control of the system, but it may create additional dispatch I/O convenience
+ * operations or dispatch I/O channels associated with that file descriptor.
+ *
+ * @param fd The file descriptor from which to read the data.
+ * @param length The length of data to read from the file descriptor,
+ * or SIZE_MAX to indicate that all of the data currently
+ * available from the file descriptor should be read.
+ * @param queue The dispatch queue to which the handler should be
+ * submitted.
+ * @param handler The handler to enqueue when data is ready to be
+ * delivered.
+ * @param data The data read from the file descriptor.
+ * @param error An errno condition for the read operation or
+ * zero if the read was successful.
+ */
+__OSX_AVAILABLE_STARTING(__MAC_10_7,__IPHONE_5_0)
+DISPATCH_EXPORT DISPATCH_NONNULL3 DISPATCH_NONNULL4 DISPATCH_NOTHROW
+void
+dispatch_read(dispatch_fd_t fd,
+ size_t length,
+ dispatch_queue_t queue,
+ void (^handler)(dispatch_data_t data, int error));
+
+/*!
+ * @function dispatch_write
+ * Schedule a write operation for asynchronous execution on the specified file
+ * descriptor. The specified handler is enqueued when the operation has
+ * completed or an error occurs.
+ *
+ * If an unrecoverable error occurs on the file descriptor, the handler will be
+ * enqueued with the appropriate error code along with the data that could not
+ * be successfully written.
+ *
+ * An invocation of the handler with an error code of zero indicates that the
+ * data was fully written to the channel.
+ *
+ * The system takes control of the file descriptor until the handler is
+ * enqueued, and during this time file descriptor flags such as O_NONBLOCK will
+ * be modified by the system on behalf of the application. It is an error for
+ * the application to modify a file descriptor directly while it is under the
+ * control of the system, but it may create additional dispatch I/O convenience
+ * operations or dispatch I/O channels associated with that file descriptor.
+ *
+ * @param fd The file descriptor to which to write the data.
+ * @param data The data object to write to the file descriptor.
+ * @param queue The dispatch queue to which the handler should be
+ * submitted.
+ * @param handler The handler to enqueue when the data has been written.
+ * @param data The data that could not be written to the I/O
+ * channel, or NULL.
+ * @param error An errno condition for the write operation or
+ * zero if the write was successful.
+ */
+__OSX_AVAILABLE_STARTING(__MAC_10_7,__IPHONE_5_0)
+DISPATCH_EXPORT DISPATCH_NONNULL2 DISPATCH_NONNULL3 DISPATCH_NONNULL4
+DISPATCH_NOTHROW
+void
+dispatch_write(dispatch_fd_t fd,
+ dispatch_data_t data,
+ dispatch_queue_t queue,
+ void (^handler)(dispatch_data_t data, int error));
+
+/*!
+ * @functiongroup Dispatch I/O Channel API
+ */
+
+/*!
+ * @typedef dispatch_io_t
+ * A dispatch I/O channel represents the asynchronous I/O policy applied to a
+ * file descriptor. I/O channels are first class dispatch objects and may be
+ * retained and released, suspended and resumed, etc.
+ */
+DISPATCH_DECL(dispatch_io);
+
+/*!
+ * @typedef dispatch_io_handler_t
+ * The prototype of I/O handler blocks for dispatch I/O operations.
+ *
+ * @param done A flag indicating whether the operation is complete.
+ * @param data The data object to be handled.
+ * @param error An errno condition for the operation.
+ */
+typedef void (^dispatch_io_handler_t)(bool done, dispatch_data_t data,
+ int error);
+
+/*!
+ * @typedef dispatch_io_type_t
+ * The type of a dispatch I/O channel:
+ *
+ * @const DISPATCH_IO_STREAM A dispatch I/O channel representing a stream of
+ * bytes. Read and write operations on a channel of this type are performed
+ * serially (in order of creation) and read/write data at the file pointer
+ * position that is current at the time the operation starts executing.
+ * Operations of different type (read vs. write) may be performed simultaneously.
+ * Offsets passed to operations on a channel of this type are ignored.
+ *
+ * @const DISPATCH_IO_RANDOM A dispatch I/O channel representing a random
+ * access file. Read and write operations on a channel of this type may be
+ * performed concurrently and read/write data at the specified offset. Offsets
+ * are interpreted relative to the file pointer position current at the time the
+ * I/O channel is created. Attempting to create a channel of this type for a
+ * file descriptor that is not seekable will result in an error.
+ */
+#define DISPATCH_IO_STREAM 0
+#define DISPATCH_IO_RANDOM 1
+
+typedef unsigned long dispatch_io_type_t;
+
+/*!
+ * @function dispatch_io_create
+ * Create a dispatch I/O channel associated with a file descriptor. The system
+ * takes control of the file descriptor until the channel is closed, an error
+ * occurs on the file descriptor or all references to the channel are released.
+ * At that time the specified cleanup handler will be enqueued and control over
+ * the file descriptor relinquished.
+ *
+ * While a file descriptor is under the control of a dispatch I/O channel, file
+ * descriptor flags such as O_NONBLOCK will be modified by the system on behalf
+ * of the application. It is an error for the application to modify a file
+ * descriptor directly while it is under the control of a dispatch I/O channel,
+ * but it may create additional channels associated with that file descriptor.
+ *
+ * @param type The desired type of I/O channel (DISPATCH_IO_STREAM
+ * or DISPATCH_IO_RANDOM).
+ * @param fd The file descriptor to associate with the I/O channel.
+ * @param queue The dispatch queue to which the handler should be submitted.
+ * @param cleanup_handler The handler to enqueue when the system
+ * relinquishes control over the file descriptor.
+ * @param error An errno condition if control is relinquished
+ * because channel creation failed, zero otherwise.
+ * @result The newly created dispatch I/O channel or NULL if an error
+ * occurred.
+ */
+__OSX_AVAILABLE_STARTING(__MAC_10_7,__IPHONE_5_0)
+DISPATCH_EXPORT DISPATCH_MALLOC DISPATCH_WARN_RESULT DISPATCH_NOTHROW
+dispatch_io_t
+dispatch_io_create(dispatch_io_type_t type,
+ dispatch_fd_t fd,
+ dispatch_queue_t queue,
+ void (^cleanup_handler)(int error));
+
+/*!
+* @function dispatch_io_create_with_path
+* Create a dispatch I/O channel associated with a path name. The specified
+* path, oflag and mode parameters will be passed to open(2) when the first I/O
+* operation on the channel is ready to execute and the resulting file
+* descriptor will remain open and under the control of the system until the
+* channel is closed, an error occurs on the file descriptor or all references
+* to the channel are released. At that time the file descriptor will be closed
+* and the specified cleanup handler will be enqueued.
+*
+* @param type The desired type of I/O channel (DISPATCH_IO_STREAM
+* or DISPATCH_IO_RANDOM).
+* @param path The path to associate with the I/O channel.
+* @param oflag The flags to pass to open(2) when opening the file at
+* path.
+* @param mode The mode to pass to open(2) when creating the file at
+* path (i.e. with flag O_CREAT), zero otherwise.
+* @param queue The dispatch queue to which the handler should be
+* submitted.
+* @param cleanup_handler The handler to enqueue when the system
+* has closed the file at path.
+* @param error An errno condition if control is relinquished
+* because channel creation or opening of the
+* specified file failed, zero otherwise.
+* @result The newly created dispatch I/O channel or NULL if an error
+* occurred.
+*/
+__OSX_AVAILABLE_STARTING(__MAC_10_7,__IPHONE_5_0)
+DISPATCH_EXPORT DISPATCH_MALLOC DISPATCH_NONNULL2 DISPATCH_WARN_RESULT
+DISPATCH_NOTHROW
+dispatch_io_t
+dispatch_io_create_with_path(dispatch_io_type_t type,
+ const char *path, int oflag, mode_t mode,
+ dispatch_queue_t queue,
+ void (^cleanup_handler)(int error));
+
+/*!
+ * @function dispatch_io_create_with_io
+ * Create a new dispatch I/O channel from an existing dispatch I/O channel.
+ * The new channel inherits the file descriptor or path name associated with
+ * the existing channel, but not its channel type or policies.
+ *
+ * If the existing channel is associated with a file descriptor, control by the
+ * system over that file descriptor is extended until the new channel is also
+ * closed, an error occurs on the file descriptor, or all references to both
+ * channels are released. At that time the specified cleanup handler will be
+ * enqueued and control over the file descriptor relinquished.
+ *
+ * While a file descriptor is under the control of a dispatch I/O channel, file
+ * descriptor flags such as O_NONBLOCK will be modified by the system on behalf
+ * of the application. It is an error for the application to modify a file
+ * descriptor directly while it is under the control of a dispatch I/O channel,
+ * but it may create additional channels associated with that file descriptor.
+ *
+ * @param type The desired type of I/O channel (DISPATCH_IO_STREAM
+ * or DISPATCH_IO_RANDOM).
+ * @param io The existing channel to create the new I/O channel from.
+ * @param queue The dispatch queue to which the handler should be submitted.
+ * @param cleanup_handler The handler to enqueue when the system
+ * relinquishes control over the file descriptor
+ * (resp. closes the file at path) associated with
+ * the existing channel.
+ * @param error An errno condition if control is relinquished
+ * because channel creation failed, zero otherwise.
+ * @result The newly created dispatch I/O channel or NULL if an error
+ * occurred.
+ */
+__OSX_AVAILABLE_STARTING(__MAC_10_7,__IPHONE_5_0)
+DISPATCH_EXPORT DISPATCH_NONNULL2 DISPATCH_MALLOC DISPATCH_WARN_RESULT
+DISPATCH_NOTHROW
+dispatch_io_t
+dispatch_io_create_with_io(dispatch_io_type_t type,
+ dispatch_io_t io,
+ dispatch_queue_t queue,
+ void (^cleanup_handler)(int error));
+
+/*!
+ * @function dispatch_io_read
+ * Schedule a read operation for asynchronous execution on the specified I/O
+ * channel. The I/O handler is enqueued one or more times depending on the
+ * general load of the system and the policy specified on the I/O channel.
+ *
+ * Any data read from the channel is described by the dispatch data object
+ * passed to the I/O handler. This object will be automatically released by the
+ * system when the I/O handler returns. It is the responsibility of the
+ * application to retain, concatenate or copy the data object if it is needed
+ * after the I/O handler returns.
+ *
+ * Dispatch I/O handlers are not reentrant. The system will ensure that no new
+ * I/O handler instance is invoked until the previously enqueued handler block
+ * has returned.
+ *
+ * An invocation of the I/O handler with the done flag set indicates that the
+ * read operation is complete and that the handler will not be enqueued again.
+ *
+ * If an unrecoverable error occurs on the I/O channel's underlying file
+ * descriptor, the I/O handler will be enqueued with the done flag set, the
+ * appropriate error code and a NULL data object.
+ *
+ * An invocation of the I/O handler with the done flag set, an error code of
+ * zero and an empty data object indicates that EOF was reached.
+ *
+ * @param channel The dispatch I/O channel from which to read the data.
+ * @param offset The offset relative to the channel position from which
+ * to start reading (only for DISPATCH_IO_RANDOM).
+ * @param length The length of data to read from the I/O channel, or
+ * SIZE_MAX to indicate that data should be read until EOF
+ * is reached.
+ * @param queue The dispatch queue to which the I/O handler should be
+ * submitted.
+ * @param io_handler The I/O handler to enqueue when data is ready to be
+ * delivered.
+ * @param done A flag indicating whether the operation is complete.
+ * @param data An object with the data most recently read from the
+ * I/O channel as part of this read operation, or NULL.
+ * @param error An errno condition for the read operation or zero if
+ * the read was successful.
+ */
+__OSX_AVAILABLE_STARTING(__MAC_10_7,__IPHONE_5_0)
+DISPATCH_EXPORT DISPATCH_NONNULL1 DISPATCH_NONNULL4 DISPATCH_NONNULL5
+DISPATCH_NOTHROW
+void
+dispatch_io_read(dispatch_io_t channel,
+ off_t offset,
+ size_t length,
+ dispatch_queue_t queue,
+ dispatch_io_handler_t io_handler);
+
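A minimal sketch of the accumulate-until-EOF pattern implied by the discussion
above (read_all and done_handler are hypothetical names):

    #include <dispatch/dispatch.h>
    #include <stdbool.h>
    #include <stdint.h>

    // Read a channel to EOF, accumulating the delivered chunks, then hand the
    // result to done_handler (which must retain it if needed after the call).
    static void
    read_all(dispatch_io_t channel, dispatch_queue_t queue,
            void (^done_handler)(dispatch_data_t contents, int error))
    {
        __block dispatch_data_t contents = NULL; // NULL if nothing was read
        dispatch_io_read(channel, 0, SIZE_MAX, queue,
                ^(bool done, dispatch_data_t data, int error) {
            if (data && dispatch_data_get_size(data)) {
                if (contents) {
                    // The handler's data object is released by the system when
                    // the handler returns, so keep our own concatenated copy.
                    dispatch_data_t combined =
                            dispatch_data_create_concat(contents, data);
                    dispatch_release(contents);
                    contents = combined;
                } else {
                    dispatch_retain(data);
                    contents = data;
                }
            }
            if (done) { // error == 0 with empty data means EOF was reached
                done_handler(contents, error);
                if (contents) dispatch_release(contents);
            }
        });
    }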
+/*!
+ * @function dispatch_io_write
+ * Schedule a write operation for asynchronous execution on the specified I/O
+ * channel. The I/O handler is enqueued one or more times depending on the
+ * general load of the system and the policy specified on the I/O channel.
+ *
+ * Any data remaining to be written to the I/O channel is described by the
+ * dispatch data object passed to the I/O handler. This object will be
+ * automatically released by the system when the I/O handler returns. It is the
+ * responsibility of the application to retain, concatenate or copy the data
+ * object if it is needed after the I/O handler returns.
+ *
+ * Dispatch I/O handlers are not reentrant. The system will ensure that no new
+ * I/O handler instance is invoked until the previously enqueued handler block
+ * has returned.
+ *
+ * An invocation of the I/O handler with the done flag set indicates that the
+ * write operation is complete and that the handler will not be enqueued again.
+ *
+ * If an unrecoverable error occurs on the I/O channel's underlying file
+ * descriptor, the I/O handler will be enqueued with the done flag set, the
+ * appropriate error code and an object containing the data that could not be
+ * written.
+ *
+ * An invocation of the I/O handler with the done flag set and an error code of
+ * zero indicates that the data was fully written to the channel.
+ *
+ * @param channel The dispatch I/O channel on which to write the data.
+ * @param offset The offset relative to the channel position from which
+ * to start writing (only for DISPATCH_IO_RANDOM).
+ * @param data The data to write to the I/O channel. The data object
+ * will be retained by the system until the write operation
+ * is complete.
+ * @param queue The dispatch queue to which the I/O handler should be
+ * submitted.
+ * @param io_handler The I/O handler to enqueue when data has been delivered.
+ * @param done A flag indicating whether the operation is complete.
+ * @param data An object with the data remaining to be
+ * written to the I/O channel as part of this write
+ * operation, or NULL.
+ * @param error An errno condition for the write operation or zero
+ * if the write was successful.
+ */
+__OSX_AVAILABLE_STARTING(__MAC_10_7,__IPHONE_5_0)
+DISPATCH_EXPORT DISPATCH_NONNULL1 DISPATCH_NONNULL3 DISPATCH_NONNULL4
+DISPATCH_NONNULL5 DISPATCH_NOTHROW
+void
+dispatch_io_write(dispatch_io_t channel,
+ off_t offset,
+ dispatch_data_t data,
+ dispatch_queue_t queue,
+ dispatch_io_handler_t io_handler);
+
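A minimal sketch of a write operation, assuming a channel and queue set up as
above (write_message is a hypothetical helper):

    #include <dispatch/dispatch.h>
    #include <stdbool.h>
    #include <stdio.h>
    #include <string.h>

    // Write a C string to the channel; the data object is copied up front and
    // retained by the system until the operation completes.
    static void
    write_message(dispatch_io_t channel, dispatch_queue_t queue, const char *msg)
    {
        dispatch_data_t data = dispatch_data_create(msg, strlen(msg), queue,
                DISPATCH_DATA_DESTRUCTOR_DEFAULT); // default destructor: copy
        dispatch_io_write(channel, 0, data, queue,
                ^(bool done, dispatch_data_t remaining, int error) {
            if (done && error)
                fprintf(stderr, "write failed: %d\n", error);
            // On error, "remaining" describes the bytes that were not written.
        });
        dispatch_release(data); // the write operation holds its own reference
    }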
+/*!
+ * @typedef dispatch_io_close_flags_t
+ * The type of flags you can set on a dispatch_io_close() call
+ *
+ * @const DISPATCH_IO_STOP Stop outstanding operations on a channel when
+ * the channel is closed.
+ */
+#define DISPATCH_IO_STOP 0x1
+
+typedef unsigned long dispatch_io_close_flags_t;
+
+/*!
+ * @function dispatch_io_close
+ * Close the specified I/O channel to new read or write operations; scheduling
+ * operations on a closed channel results in their handler returning an error.
+ *
+ * If the DISPATCH_IO_STOP flag is provided, the system will make a best effort
+ * to interrupt any outstanding read and write operations on the I/O channel,
+ * otherwise those operations will run to completion normally.
+ * Partial results of read and write operations may be returned even after a
+ * channel is closed with the DISPATCH_IO_STOP flag.
+ * The final invocation of an I/O handler of an interrupted operation will be
+ * passed an ECANCELED error code, as will the I/O handler of an operation
+ * scheduled on a closed channel.
+ *
+ * @param channel The dispatch I/O channel to close.
+ * @param flags The flags for the close operation.
+ */
+__OSX_AVAILABLE_STARTING(__MAC_10_7,__IPHONE_5_0)
+DISPATCH_EXPORT DISPATCH_NONNULL1 DISPATCH_NOTHROW
+void
+dispatch_io_close(dispatch_io_t channel, dispatch_io_close_flags_t flags);
+
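A minimal sketch of the DISPATCH_IO_STOP semantics, assuming a channel with
operations still in flight (cancel_transfer is a hypothetical helper):

    #include <dispatch/dispatch.h>

    // Interrupted handlers get ECANCELED on their final invocation and may
    // still be passed partial results.
    static void
    cancel_transfer(dispatch_io_t channel)
    {
        dispatch_io_close(channel, DISPATCH_IO_STOP);
        dispatch_release(channel); // drop the creation reference
    }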
+/*!
+ * @function dispatch_io_barrier
+ * Schedule a barrier operation on the specified I/O channel; all previously
+ * scheduled operations on the channel will complete before the provided
+ * barrier block is enqueued onto the global queue determined by the channel's
+ * target queue, and no subsequently scheduled operations will start until the
+ * barrier block has returned.
+ *
+ * If multiple channels are associated with the same file descriptor, a barrier
+ * operation scheduled on any of these channels will act as a barrier across all
+ * channels in question, i.e. all previously scheduled operations on any of the
+ * channels will complete before the barrier block is enqueued, and no
+ * operations subsequently scheduled on any of the channels will start until the
+ * barrier block has returned.
+ *
+ * While the barrier block is running, it may safely operate on the channel's
+ * underlying file descriptor with fsync(2), lseek(2) etc. (but not close(2)).
+ *
+ * @param channel The dispatch I/O channel on which to schedule the barrier
+ * operation.
+ * @param barrier The barrier block to enqueue once all previously scheduled
+ * operations on the channel have completed.
+ */
+__OSX_AVAILABLE_STARTING(__MAC_10_7,__IPHONE_5_0)
+DISPATCH_EXPORT DISPATCH_NONNULL_ALL DISPATCH_NOTHROW
+void
+dispatch_io_barrier(dispatch_io_t channel, dispatch_block_t barrier);
+
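A minimal sketch combining dispatch_io_barrier() with
dispatch_io_get_descriptor() (documented below); flush_channel is a
hypothetical helper:

    #include <dispatch/dispatch.h>
    #include <unistd.h>

    // By the time the barrier block runs, all previously scheduled operations
    // on the descriptor have completed; fsync(2) on it is explicitly permitted.
    static void
    flush_channel(dispatch_io_t channel)
    {
        dispatch_io_barrier(channel, ^{
            dispatch_fd_t fd = dispatch_io_get_descriptor(channel);
            if (fd != -1)
                (void)fsync(fd);
        });
    }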
+/*!
+ * @function dispatch_io_get_descriptor
+ * Returns the file descriptor underlying a dispatch I/O channel.
+ *
+ * Will return -1 for a channel closed with dispatch_io_close() and for a
+ * channel associated with a path name that has not yet been open(2)ed.
+ *
+ * If called from a barrier block scheduled on a channel associated with a path
+ * name that has not yet been open(2)ed, this will trigger the channel open(2)
+ * operation and return the resulting file descriptor.
+ *
+ * @param channel The dispatch I/O channel to query.
+ * @result The file descriptor underlying the channel, or -1.
+ */
+__OSX_AVAILABLE_STARTING(__MAC_10_7,__IPHONE_5_0)
+DISPATCH_EXPORT DISPATCH_NONNULL_ALL DISPATCH_WARN_RESULT DISPATCH_NOTHROW
+dispatch_fd_t
+dispatch_io_get_descriptor(dispatch_io_t channel);
+
+/*!
+ * @function dispatch_io_set_high_water
+ * Set a high water mark on the I/O channel for all operations.
+ *
+ * The system will make a best effort to enqueue I/O handlers with partial
+ * results as soon as the number of bytes processed by an operation (i.e. read
+ * or written) reaches the high water mark.
+ *
+ * The size of data objects passed to I/O handlers for this channel will never
+ * exceed the specified high water mark.
+ *
+ * The default value for the high water mark is unlimited (i.e. SIZE_MAX).
+ *
+ * @param channel The dispatch I/O channel on which to set the policy.
+ * @param high_water The number of bytes to use as a high water mark.
+ */
+__OSX_AVAILABLE_STARTING(__MAC_10_7,__IPHONE_5_0)
+DISPATCH_EXPORT DISPATCH_NONNULL1 DISPATCH_NOTHROW
+void
+dispatch_io_set_high_water(dispatch_io_t channel, size_t high_water);
+
+/*!
+ * @function dispatch_io_set_low_water
+ * Set a low water mark on the I/O channel for all operations.
+ *
+ * The system will process (i.e. read or write) at least the low water mark
+ * number of bytes for an operation before enqueueing I/O handlers with partial
+ * results.
+ *
+ * The size of data objects passed to intermediate I/O handler invocations for
+ * this channel (i.e. excluding the final invocation) will never be smaller than
+ * the specified low water mark, except if the channel has an interval with the
+ * DISPATCH_IO_STRICT_INTERVAL flag set or if EOF or an error was encountered.
+ *
+ * In general, I/O handlers should be prepared to receive amounts of data
+ * significantly larger than the low water mark. If an I/O handler requires
+ * intermediate results of fixed size, set both the low and the high water
+ * mark to that size.
+ *
+ * The default value for the low water mark is unspecified, but must be assumed
+ * to be such that intermediate handler invocations may occur.
+ * If I/O handler invocations with partial results are not desired, set the
+ * low water mark to SIZE_MAX.
+ *
+ * @param channel The dispatch I/O channel on which to set the policy.
+ * @param low_water The number of bytes to use as a low water mark.
+ */
+__OSX_AVAILABLE_STARTING(__MAC_10_7,__IPHONE_5_0)
+DISPATCH_EXPORT DISPATCH_NONNULL1 DISPATCH_NOTHROW
+void
+dispatch_io_set_low_water(dispatch_io_t channel, size_t low_water);
+
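A minimal sketch of the fixed-size-record policy mentioned in the low water
discussion (use_fixed_records and the 4 KiB record size are illustrative):

    #include <dispatch/dispatch.h>

    // Deliver data to the I/O handler in fixed 4 KiB records by pinning both
    // watermarks to the record size.
    static void
    use_fixed_records(dispatch_io_t channel)
    {
        enum { RECORD_SIZE = 4096 };
        dispatch_io_set_low_water(channel, RECORD_SIZE);
        dispatch_io_set_high_water(channel, RECORD_SIZE);
    }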
+/*!
+ * @typedef dispatch_io_interval_flags_t
+ * Type of flags to set on dispatch_io_set_interval()
+ *
+ * @const DISPATCH_IO_STRICT_INTERVAL Enqueue I/O handlers at a channel's
+ * interval setting even if the amount of data ready to be delivered is less
+ * than the low water mark (or zero).
+ */
+#define DISPATCH_IO_STRICT_INTERVAL 0x1
+
+typedef unsigned long dispatch_io_interval_flags_t;
+
+/*!
+ * @function dispatch_io_set_interval
+ * Set a nanosecond interval at which I/O handlers are to be enqueued on the
+ * I/O channel for all operations.
+ *
+ * This allows an application to receive periodic feedback on the progress of
+ * read and write operations, e.g. for the purposes of displaying progress bars.
+ *
+ * If the amount of data ready to be delivered to an I/O handler at the
+ * interval is less than the channel's low water mark, the handler will only
+ * be enqueued if the DISPATCH_IO_STRICT_INTERVAL flag is set.
+ *
+ * Note that the system may defer enqueueing interval I/O handlers by a small
+ * unspecified amount of leeway in order to align with other system activity for
+ * improved system performance or power consumption.
+ *
+ * @param channel The dispatch I/O channel on which to set the policy.
+ * @param interval The interval in nanoseconds at which delivery of the I/O
+ * handler is desired.
+ * @param flags Flags indicating desired data delivery behavior at
+ * interval time.
+ */
+__OSX_AVAILABLE_STARTING(__MAC_10_7,__IPHONE_5_0)
+DISPATCH_EXPORT DISPATCH_NONNULL1 DISPATCH_NOTHROW
+void
+dispatch_io_set_interval(dispatch_io_t channel,
+ uint64_t interval,
+ dispatch_io_interval_flags_t flags);
+
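A minimal sketch of periodic progress delivery (enable_progress_updates is a
hypothetical helper; the one-second interval is illustrative):

    #include <dispatch/dispatch.h>

    // Request handler invocations roughly once per second, even if fewer bytes
    // than the low water mark (possibly zero) are ready at that moment.
    static void
    enable_progress_updates(dispatch_io_t channel)
    {
        dispatch_io_set_interval(channel, NSEC_PER_SEC,
                DISPATCH_IO_STRICT_INTERVAL);
    }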
+#endif /* __BLOCKS__ */
+
+__END_DECLS
+
+#endif /* __DISPATCH_IO__ */
diff --git a/dispatch/object.h b/dispatch/object.h
index 86ea159..2ecf251 100644
--- a/dispatch/object.h
+++ b/dispatch/object.h
@@ -1,20 +1,20 @@
/*
- * Copyright (c) 2008-2009 Apple Inc. All rights reserved.
+ * Copyright (c) 2008-2010 Apple Inc. All rights reserved.
*
* @APPLE_APACHE_LICENSE_HEADER_START@
- *
+ *
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
- *
+ *
* http://www.apache.org/licenses/LICENSE-2.0
- *
+ *
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
- *
+ *
* @APPLE_APACHE_LICENSE_HEADER_END@
*/
@@ -26,7 +26,7 @@
#include <dispatch/base.h> // for HeaderDoc
#endif
-__DISPATCH_BEGIN_DECLS
+__BEGIN_DECLS
/*!
* @function dispatch_debug
@@ -34,6 +34,13 @@
* @abstract
* Programmatically log debug information about a dispatch object.
*
+ * @discussion
+ * Programmatically log debug information about a dispatch object. By default,
+ * the log output is sent to syslog at notice level. In the debug version of
+ * the library, the log output is sent to a file in /var/tmp.
+ * The log output destination can be configured via the LIBDISPATCH_LOG
+ * environment variable, valid values are: YES, NO, syslog, stderr, file.
+ *
* @param object
* The object to introspect.
*
@@ -41,12 +48,14 @@
* The message to log above and beyond the introspection.
*/
__OSX_AVAILABLE_STARTING(__MAC_10_6,__IPHONE_4_0)
-DISPATCH_EXPORT DISPATCH_NONNULL2 DISPATCH_NOTHROW DISPATCH_FORMAT(printf,2,3)
+DISPATCH_EXPORT DISPATCH_NONNULL2 DISPATCH_NOTHROW
+__attribute__((__format__(printf,2,3)))
void
dispatch_debug(dispatch_object_t object, const char *message, ...);
__OSX_AVAILABLE_STARTING(__MAC_10_6,__IPHONE_4_0)
-DISPATCH_EXPORT DISPATCH_NONNULL2 DISPATCH_NOTHROW DISPATCH_FORMAT(printf,2,0)
+DISPATCH_EXPORT DISPATCH_NONNULL2 DISPATCH_NOTHROW
+__attribute__((__format__(printf,2,0)))
void
dispatch_debugv(dispatch_object_t object, const char *message, va_list ap);
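A minimal usage sketch, assuming LIBDISPATCH_LOG=stderr is exported in the
environment (dump_queue is a hypothetical helper):

    #include <dispatch/dispatch.h>

    // Log a queue's state; with LIBDISPATCH_LOG=stderr the output goes to
    // stderr instead of syslog.
    static void
    dump_queue(dispatch_queue_t q, long pending)
    {
        dispatch_debug(q, "pending jobs: %ld", pending);
    }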
@@ -103,7 +112,8 @@
* The context of the object; may be NULL.
*/
__OSX_AVAILABLE_STARTING(__MAC_10_6,__IPHONE_4_0)
-DISPATCH_EXPORT DISPATCH_NONNULL_ALL DISPATCH_PURE DISPATCH_WARN_RESULT DISPATCH_NOTHROW
+DISPATCH_EXPORT DISPATCH_NONNULL_ALL DISPATCH_PURE DISPATCH_WARN_RESULT
+DISPATCH_NOTHROW
void *
dispatch_get_context(dispatch_object_t object);
@@ -166,7 +176,7 @@
* Calls to dispatch_suspend() must be balanced with calls
* to dispatch_resume().
*
- * @param object
+ * @param object
* The object to be suspended.
* The result of passing NULL in this parameter is undefined.
*/
@@ -181,7 +191,7 @@
* @abstract
* Resumes the invocation of blocks on a dispatch object.
*
- * @param object
+ * @param object
* The object to be resumed.
* The result of passing NULL in this parameter is undefined.
*/
@@ -190,6 +200,6 @@
void
dispatch_resume(dispatch_object_t object);
-__DISPATCH_END_DECLS
+__END_DECLS
#endif
diff --git a/dispatch/once.h b/dispatch/once.h
index a7a962c..32cf2e8 100644
--- a/dispatch/once.h
+++ b/dispatch/once.h
@@ -1,20 +1,20 @@
/*
- * Copyright (c) 2008-2009 Apple Inc. All rights reserved.
+ * Copyright (c) 2008-2010 Apple Inc. All rights reserved.
*
* @APPLE_APACHE_LICENSE_HEADER_START@
- *
+ *
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
- *
+ *
* http://www.apache.org/licenses/LICENSE-2.0
- *
+ *
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
- *
+ *
* @APPLE_APACHE_LICENSE_HEADER_END@
*/
@@ -26,7 +26,7 @@
#include <dispatch/base.h> // for HeaderDoc
#endif
-__DISPATCH_BEGIN_DECLS
+__BEGIN_DECLS
/*!
* @typedef dispatch_once_t
@@ -59,19 +59,38 @@
DISPATCH_EXPORT DISPATCH_NONNULL_ALL DISPATCH_NOTHROW
void
dispatch_once(dispatch_once_t *predicate, dispatch_block_t block);
-#ifdef __GNUC__
-#define dispatch_once(x, ...) do { if (__builtin_expect(*(x), ~0l) != ~0l) dispatch_once((x), (__VA_ARGS__)); } while (0)
-#endif
+
+DISPATCH_INLINE DISPATCH_ALWAYS_INLINE DISPATCH_NONNULL_ALL DISPATCH_NOTHROW
+void
+_dispatch_once(dispatch_once_t *predicate, dispatch_block_t block)
+{
+ if (DISPATCH_EXPECT(*predicate, ~0l) != ~0l) {
+ dispatch_once(predicate, block);
+ }
+}
+#undef dispatch_once
+#define dispatch_once _dispatch_once
#endif
__OSX_AVAILABLE_STARTING(__MAC_10_6,__IPHONE_4_0)
DISPATCH_EXPORT DISPATCH_NONNULL1 DISPATCH_NONNULL3 DISPATCH_NOTHROW
void
-dispatch_once_f(dispatch_once_t *predicate, void *context, void (*function)(void *));
-#ifdef __GNUC__
-#define dispatch_once_f(x, y, z) do { if (__builtin_expect(*(x), ~0l) != ~0l) dispatch_once_f((x), (y), (z)); } while (0)
-#endif
+dispatch_once_f(dispatch_once_t *predicate, void *context,
+ dispatch_function_t function);
-__DISPATCH_END_DECLS
+DISPATCH_INLINE DISPATCH_ALWAYS_INLINE DISPATCH_NONNULL1 DISPATCH_NONNULL3
+DISPATCH_NOTHROW
+void
+_dispatch_once_f(dispatch_once_t *predicate, void *context,
+ dispatch_function_t function)
+{
+ if (DISPATCH_EXPECT(*predicate, ~0l) != ~0l) {
+ dispatch_once_f(predicate, context, function);
+ }
+}
+#undef dispatch_once_f
+#define dispatch_once_f _dispatch_once_f
+
+__END_DECLS
#endif
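A minimal sketch of the usual dispatch_once() lazy-initialization idiom
(shared_work_queue and the queue label are placeholders):

    #include <dispatch/dispatch.h>

    // The predicate must have static or global storage and be zero-initialized.
    static dispatch_queue_t
    shared_work_queue(void)
    {
        static dispatch_once_t pred;
        static dispatch_queue_t queue;
        dispatch_once(&pred, ^{
            queue = dispatch_queue_create("com.example.work", NULL);
        });
        return queue;
    }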
diff --git a/dispatch/queue.h b/dispatch/queue.h
index cd51143..d767771 100644
--- a/dispatch/queue.h
+++ b/dispatch/queue.h
@@ -1,20 +1,20 @@
/*
- * Copyright (c) 2008-2009 Apple Inc. All rights reserved.
+ * Copyright (c) 2008-2011 Apple Inc. All rights reserved.
*
* @APPLE_APACHE_LICENSE_HEADER_START@
- *
+ *
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
- *
+ *
* http://www.apache.org/licenses/LICENSE-2.0
- *
+ *
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
- *
+ *
* @APPLE_APACHE_LICENSE_HEADER_END@
*/
@@ -30,7 +30,7 @@
* @header
*
* Dispatch is an abstract model for expressing concurrency via simple but
- * powerful API.
+ * powerful API.
*
* At the core, dispatch provides serial FIFO queues to which blocks may be
* submitted. Blocks submitted to these dispatch queues are invoked on a pool
@@ -70,7 +70,7 @@
* @typedef dispatch_queue_attr_t
*
* @abstract
- * Attribute and policy extensions for dispatch queues.
+ * Attribute for dispatch queues.
*/
DISPATCH_DECL(dispatch_queue_attr);
@@ -111,7 +111,7 @@
typedef void (^dispatch_block_t)(void);
#endif
-__DISPATCH_BEGIN_DECLS
+__BEGIN_DECLS
/*!
* @function dispatch_async
@@ -128,8 +128,8 @@
*
* The target queue determines whether the block will be invoked serially or
* concurrently with respect to other blocks submitted to that same queue.
- * Serial queues are processed concurrently with with respect to each other.
- *
+ * Serial queues are processed concurrently with respect to each other.
+ *
* @param queue
* The target dispatch queue to which the block is submitted.
* The system will hold a reference on the target queue until the block
@@ -156,7 +156,7 @@
*
* @discussion
* See dispatch_async() for details.
- *
+ *
* @param queue
* The target dispatch queue to which the function is submitted.
* The system will hold a reference on the target queue until the function
@@ -254,9 +254,9 @@
* @discussion
* Submits a block to a dispatch queue for multiple invocations. This function
* waits for the task block to complete before returning. If the target queue
- * is a concurrent queue returned by dispatch_get_concurrent_queue(), the block
- * may be invoked concurrently, and it must therefore be reentrant safe.
- *
+ * is concurrent, the block may be invoked concurrently, and it must therefore
+ * be reentrant safe.
+ *
* Each invocation of the block will be passed the current index of iteration.
*
* @param iterations
@@ -274,7 +274,8 @@
__OSX_AVAILABLE_STARTING(__MAC_10_6,__IPHONE_4_0)
DISPATCH_EXPORT DISPATCH_NONNULL_ALL DISPATCH_NOTHROW
void
-dispatch_apply(size_t iterations, dispatch_queue_t queue, void (^block)(size_t));
+dispatch_apply(size_t iterations, dispatch_queue_t queue,
+ void (^block)(size_t));
#endif
/*!
@@ -315,13 +316,20 @@
*
* @abstract
* Returns the queue on which the currently executing block is running.
- *
+ *
* @discussion
* Returns the queue on which the currently executing block is running.
*
* When dispatch_get_current_queue() is called outside of the context of a
* submitted block, it will return the default concurrent queue.
*
+ * Recommended for debugging and logging purposes only:
+ * The code must not make any assumptions about the queue returned, unless it
+ * is one of the global queues or a queue the code has itself created.
+ * The code must not assume that synchronous execution onto a queue is safe
+ * from deadlock if that queue is not the one returned by
+ * dispatch_get_current_queue().
+ *
* @result
* Returns the current queue.
*/
@@ -350,7 +358,8 @@
#define dispatch_get_main_queue() (&_dispatch_main_q)
/*!
- * @enum dispatch_queue_priority_t
+ * @typedef dispatch_queue_priority_t
+ * Type of dispatch_queue_priority
*
* @constant DISPATCH_QUEUE_PRIORITY_HIGH
* Items dispatched to the queue will run at high priority,
@@ -368,12 +377,20 @@
* i.e. the queue will be scheduled for execution after all
* default priority and high priority queues have been
* scheduled.
+ *
+ * @constant DISPATCH_QUEUE_PRIORITY_BACKGROUND
+ * Items dispatched to the queue will run at background priority, i.e. the queue
+ * will be scheduled for execution after all higher priority queues have been
+ * scheduled and the system will run items on this queue on a thread with
+ * background status as per setpriority(2) (i.e. disk I/O is throttled and the
+ * thread's scheduling priority is set to the lowest value).
*/
-enum {
- DISPATCH_QUEUE_PRIORITY_HIGH = 2,
- DISPATCH_QUEUE_PRIORITY_DEFAULT = 0,
- DISPATCH_QUEUE_PRIORITY_LOW = -2,
-};
+#define DISPATCH_QUEUE_PRIORITY_HIGH 2
+#define DISPATCH_QUEUE_PRIORITY_DEFAULT 0
+#define DISPATCH_QUEUE_PRIORITY_LOW (-2)
+#define DISPATCH_QUEUE_PRIORITY_BACKGROUND INT16_MIN
+
+typedef long dispatch_queue_priority_t;
/*!
* @function dispatch_get_global_queue
@@ -397,9 +414,26 @@
* Returns the requested global queue.
*/
__OSX_AVAILABLE_STARTING(__MAC_10_6,__IPHONE_4_0)
-DISPATCH_EXPORT DISPATCH_PURE DISPATCH_WARN_RESULT DISPATCH_NOTHROW
+DISPATCH_EXPORT DISPATCH_CONST DISPATCH_WARN_RESULT DISPATCH_NOTHROW
dispatch_queue_t
-dispatch_get_global_queue(long priority, unsigned long flags);
+dispatch_get_global_queue(dispatch_queue_priority_t priority,
+ unsigned long flags);
+
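A minimal sketch of dispatching work at background priority (compact_cache and
schedule_maintenance are hypothetical names):

    #include <dispatch/dispatch.h>

    // Run disk-heavy maintenance at background priority (throttled I/O, lowest
    // scheduling priority).
    static void compact_cache(void);

    static void
    schedule_maintenance(void)
    {
        dispatch_queue_t bg =
                dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_BACKGROUND, 0);
        dispatch_async(bg, ^{ compact_cache(); });
    }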
+/*!
+ * @const DISPATCH_QUEUE_SERIAL
+ * @discussion A dispatch queue that invokes blocks serially in FIFO order.
+ */
+#define DISPATCH_QUEUE_SERIAL NULL
+
+/*!
+ * @const DISPATCH_QUEUE_CONCURRENT
+ * @discussion A dispatch queue that may invoke blocks concurrently and supports
+ * barrier blocks submitted with the dispatch barrier API.
+ */
+#define DISPATCH_QUEUE_CONCURRENT (&_dispatch_queue_attr_concurrent)
+__OSX_AVAILABLE_STARTING(__MAC_10_7,__IPHONE_4_3)
+DISPATCH_EXPORT
+struct dispatch_queue_attr_s _dispatch_queue_attr_concurrent;
/*!
* @function dispatch_queue_create
@@ -408,19 +442,29 @@
* Creates a new dispatch queue to which blocks may be submitted.
*
* @discussion
- * Dispatch queues invoke blocks serially in FIFO order.
+ * Dispatch queues created with the DISPATCH_QUEUE_SERIAL or a NULL attribute
+ * invoke blocks serially in FIFO order.
*
- * When the dispatch queue is no longer needed, it should be released
- * with dispatch_release(). Note that any pending blocks submitted
- * to a queue will hold a reference to that queue. Therefore a queue
- * will not be deallocated until all pending blocks have finished.
+ * Dispatch queues created with the DISPATCH_QUEUE_CONCURRENT attribute may
+ * invoke blocks concurrently (similarly to the global concurrent queues, but
+ * potentially with more overhead), and support barrier blocks submitted with
+ * the dispatch barrier API, which e.g. enables the implementation of efficient
+ * reader-writer schemes.
+ *
+ * When a dispatch queue is no longer needed, it should be released with
+ * dispatch_release(). Note that any pending blocks submitted to a queue will
+ * hold a reference to that queue. Therefore a queue will not be deallocated
+ * until all pending blocks have finished.
+ *
+ * The target queue of a newly created dispatch queue is the default priority
+ * global concurrent queue.
*
* @param label
* A string label to attach to the queue.
* This parameter is optional and may be NULL.
*
* @param attr
- * Unused. Pass NULL for now.
+ * DISPATCH_QUEUE_SERIAL or DISPATCH_QUEUE_CONCURRENT.
*
* @result
* The newly created dispatch queue.
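A minimal sketch of creating a private concurrent queue (make_cache_queue and
the label are placeholders):

    #include <dispatch/dispatch.h>

    // A private concurrent queue: blocks submitted with dispatch_async() may
    // run concurrently, while barrier blocks (see the barrier API below)
    // serialize against them.
    static dispatch_queue_t
    make_cache_queue(void)
    {
        return dispatch_queue_create("com.example.cache",
                DISPATCH_QUEUE_CONCURRENT);
    }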
@@ -444,11 +488,20 @@
* The label of the queue. The result may be NULL.
*/
__OSX_AVAILABLE_STARTING(__MAC_10_6,__IPHONE_4_0)
-DISPATCH_EXPORT DISPATCH_NONNULL_ALL DISPATCH_PURE DISPATCH_WARN_RESULT DISPATCH_NOTHROW
+DISPATCH_EXPORT DISPATCH_NONNULL_ALL DISPATCH_PURE DISPATCH_WARN_RESULT
+DISPATCH_NOTHROW
const char *
dispatch_queue_get_label(dispatch_queue_t queue);
/*!
+ * @const DISPATCH_TARGET_QUEUE_DEFAULT
+ * @discussion Constant to pass to the dispatch_set_target_queue() and
+ * dispatch_source_create() functions to indicate that the default target queue
+ * for the given object type should be used.
+ */
+#define DISPATCH_TARGET_QUEUE_DEFAULT NULL
+
+/*!
* @function dispatch_set_target_queue
*
* @abstract
@@ -457,27 +510,38 @@
* @discussion
* An object's target queue is responsible for processing the object.
*
- * A dispatch queue's priority is inherited by its target queue. Use the
+ * A dispatch queue's priority is inherited from its target queue. Use the
* dispatch_get_global_queue() function to obtain suitable target queue
* of the desired priority.
*
+ * Blocks submitted to a serial queue whose target queue is another serial
+ * queue will not be invoked concurrently with blocks submitted to the target
+ * queue or to any other queue with that same target queue.
+ *
+ * The result of introducing a cycle into the hierarchy of target queues is
+ * undefined.
+ *
* A dispatch source's target queue specifies where its event handler and
* cancellation handler blocks will be submitted.
*
- * The result of calling dispatch_set_target_queue() on any other type of
- * dispatch object is undefined.
+ * A dispatch I/O channel's target queue specifies where its I/O operations
+ * are executed.
*
- * @param object
+ * For all other dispatch object types, the only function of the target queue
+ * is to determine where an object's finalizer function is invoked.
+ *
+ * @param object
* The object to modify.
* The result of passing NULL in this parameter is undefined.
*
- * @param queue
+ * @param queue
* The new target queue for the object. The queue is retained, and the
- * previous one, if any, is released.
- * The result of passing NULL in this parameter is undefined.
+ * previous target queue, if any, is released.
+ * If queue is DISPATCH_TARGET_QUEUE_DEFAULT, set the object's target queue
+ * to the default target queue for the given object type.
*/
__OSX_AVAILABLE_STARTING(__MAC_10_6,__IPHONE_4_0)
-DISPATCH_EXPORT DISPATCH_NONNULL_ALL DISPATCH_NOTHROW
+DISPATCH_EXPORT DISPATCH_NOTHROW // DISPATCH_NONNULL1
void
dispatch_set_target_queue(dispatch_object_t object, dispatch_queue_t queue);
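A minimal sketch of the serial-target-queue arrangement described above (the
queue labels and helper are placeholders):

    #include <dispatch/dispatch.h>

    // Funnel two serial queues through a common serial target so that work
    // submitted to either is never invoked concurrently with work from the
    // other.
    static dispatch_queue_t q1, q2;

    static void
    setup_queues(void)
    {
        dispatch_queue_t target =
                dispatch_queue_create("com.example.target", NULL);
        q1 = dispatch_queue_create("com.example.q1", NULL);
        q2 = dispatch_queue_create("com.example.q2", NULL);
        dispatch_set_target_queue(q1, target);
        dispatch_set_target_queue(q2, target);
        dispatch_release(target); // q1 and q2 now hold their own references
    }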
@@ -563,6 +627,243 @@
void *context,
dispatch_function_t work);
-__DISPATCH_END_DECLS
+/*!
+ * @functiongroup Dispatch Barrier API
+ * The dispatch barrier API is a mechanism for submitting barrier blocks to a
+ * dispatch queue, analogous to the dispatch_async()/dispatch_sync() API.
+ * It enables the implementation of efficient reader/writer schemes.
+ * Barrier blocks only behave specially when submitted to queues created with
+ * the DISPATCH_QUEUE_CONCURRENT attribute; on such a queue, a barrier block
+ * will not run until all blocks submitted to the queue earlier have completed,
+ * and any blocks submitted to the queue after a barrier block will not run
+ * until the barrier block has completed.
+ * When submitted to a global queue or to a queue not created with the
+ * DISPATCH_QUEUE_CONCURRENT attribute, barrier blocks behave identically to
+ * blocks submitted with the dispatch_async()/dispatch_sync() API.
+ */
+
+/*!
+ * @function dispatch_barrier_async
+ *
+ * @abstract
+ * Submits a barrier block for asynchronous execution on a dispatch queue.
+ *
+ * @discussion
+ * Submits a block to a dispatch queue like dispatch_async(), but marks that
+ * block as a barrier (relevant only on DISPATCH_QUEUE_CONCURRENT queues).
+ *
+ * See dispatch_async() for details.
+ *
+ * @param queue
+ * The target dispatch queue to which the block is submitted.
+ * The system will hold a reference on the target queue until the block
+ * has finished.
+ * The result of passing NULL in this parameter is undefined.
+ *
+ * @param block
+ * The block to submit to the target dispatch queue. This function performs
+ * Block_copy() and Block_release() on behalf of callers.
+ * The result of passing NULL in this parameter is undefined.
+ */
+#ifdef __BLOCKS__
+__OSX_AVAILABLE_STARTING(__MAC_10_7,__IPHONE_4_3)
+DISPATCH_EXPORT DISPATCH_NONNULL_ALL DISPATCH_NOTHROW
+void
+dispatch_barrier_async(dispatch_queue_t queue, dispatch_block_t block);
+#endif
+
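A minimal sketch of the reader/writer scheme mentioned above, assuming
cache_queue was created with DISPATCH_QUEUE_CONCURRENT (all names are
illustrative):

    #include <dispatch/dispatch.h>

    static dispatch_queue_t cache_queue; // created with DISPATCH_QUEUE_CONCURRENT
    static int cache_value;              // the protected state

    static void
    cache_read(void (^reply)(int value))
    {
        dispatch_async(cache_queue, ^{          // readers may run concurrently
            reply(cache_value);
        });
    }

    static void
    cache_write(int new_value)
    {
        dispatch_barrier_async(cache_queue, ^{  // writer excludes all other blocks
            cache_value = new_value;
        });
    }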
+/*!
+ * @function dispatch_barrier_async_f
+ *
+ * @abstract
+ * Submits a barrier function for asynchronous execution on a dispatch queue.
+ *
+ * @discussion
+ * Submits a function to a dispatch queue like dispatch_async_f(), but marks
+ * that function as a barrier (relevant only on DISPATCH_QUEUE_CONCURRENT
+ * queues).
+ *
+ * See dispatch_async_f() for details.
+ *
+ * @param queue
+ * The target dispatch queue to which the function is submitted.
+ * The system will hold a reference on the target queue until the function
+ * has returned.
+ * The result of passing NULL in this parameter is undefined.
+ *
+ * @param context
+ * The application-defined context parameter to pass to the function.
+ *
+ * @param work
+ * The application-defined function to invoke on the target queue. The first
+ * parameter passed to this function is the context provided to
+ * dispatch_barrier_async_f().
+ * The result of passing NULL in this parameter is undefined.
+ */
+__OSX_AVAILABLE_STARTING(__MAC_10_7,__IPHONE_4_3)
+DISPATCH_EXPORT DISPATCH_NONNULL1 DISPATCH_NONNULL3 DISPATCH_NOTHROW
+void
+dispatch_barrier_async_f(dispatch_queue_t queue,
+ void *context,
+ dispatch_function_t work);
+
+/*!
+ * @function dispatch_barrier_sync
+ *
+ * @abstract
+ * Submits a barrier block for synchronous execution on a dispatch queue.
+ *
+ * @discussion
+ * Submits a block to a dispatch queue like dispatch_sync(), but marks that
+ * block as a barrier (relevant only on DISPATCH_QUEUE_CONCURRENT queues).
+ *
+ * See dispatch_sync() for details.
+ *
+ * @param queue
+ * The target dispatch queue to which the block is submitted.
+ * The result of passing NULL in this parameter is undefined.
+ *
+ * @param block
+ * The block to be invoked on the target dispatch queue.
+ * The result of passing NULL in this parameter is undefined.
+ */
+#ifdef __BLOCKS__
+__OSX_AVAILABLE_STARTING(__MAC_10_7,__IPHONE_4_3)
+DISPATCH_EXPORT DISPATCH_NONNULL_ALL DISPATCH_NOTHROW
+void
+dispatch_barrier_sync(dispatch_queue_t queue, dispatch_block_t block);
+#endif
+
+/*!
+ * @function dispatch_barrier_sync_f
+ *
+ * @abstract
+ * Submits a barrier function for synchronous execution on a dispatch queue.
+ *
+ * @discussion
+ * Submits a function to a dispatch queue like dispatch_sync_f(), but marks
+ * that function as a barrier (relevant only on DISPATCH_QUEUE_CONCURRENT
+ * queues).
+ *
+ * See dispatch_sync_f() for details.
+ *
+ * @param queue
+ * The target dispatch queue to which the function is submitted.
+ * The result of passing NULL in this parameter is undefined.
+ *
+ * @param context
+ * The application-defined context parameter to pass to the function.
+ *
+ * @param work
+ * The application-defined function to invoke on the target queue. The first
+ * parameter passed to this function is the context provided to
+ * dispatch_barrier_sync_f().
+ * The result of passing NULL in this parameter is undefined.
+ */
+__OSX_AVAILABLE_STARTING(__MAC_10_7,__IPHONE_4_3)
+DISPATCH_EXPORT DISPATCH_NONNULL1 DISPATCH_NONNULL3 DISPATCH_NOTHROW
+void
+dispatch_barrier_sync_f(dispatch_queue_t queue,
+ void *context,
+ dispatch_function_t work);
+
+/*!
+ * @functiongroup Dispatch queue-specific contexts
+ * This API allows different subsystems to associate context with a shared
+ * queue without risk of collision and to retrieve that context from blocks
+ * executing on that queue or any of its child queues in the target queue
+ * hierarchy.
+ */
+
+/*!
+ * @function dispatch_queue_set_specific
+ *
+ * @abstract
+ * Associates a subsystem-specific context with a dispatch queue, for a key
+ * unique to the subsystem.
+ *
+ * @discussion
+ * The specified destructor will be invoked with the context on the default
+ * priority global concurrent queue when a new context is set for the same key,
+ * or after all references to the queue have been released.
+ *
+ * @param queue
+ * The dispatch queue to modify.
+ * The result of passing NULL in this parameter is undefined.
+ *
+ * @param key
+ * The key to set the context for, typically a pointer to a static variable
+ * specific to the subsystem. Keys are only compared as pointers and never
+ * dereferenced. Passing a string constant directly is not recommended.
+ * The NULL key is reserved and attempts to set a context for it are ignored.
+ *
+ * @param context
+ * The new subsystem-specific context for the object. This may be NULL.
+ *
+ * @param destructor
+ * The destructor function pointer. This may be NULL and is ignored if context
+ * is NULL.
+ */
+__OSX_AVAILABLE_STARTING(__MAC_10_7,__IPHONE_5_0)
+DISPATCH_EXPORT DISPATCH_NONNULL1 DISPATCH_NONNULL2 DISPATCH_NOTHROW
+void
+dispatch_queue_set_specific(dispatch_queue_t queue, const void *key,
+ void *context, dispatch_function_t destructor);
+
+/*!
+ * @function dispatch_queue_get_specific
+ *
+ * @abstract
+ * Returns the subsystem-specific context associated with a dispatch queue, for
+ * a key unique to the subsystem.
+ *
+ * @discussion
+ * Returns the context for the specified key if it has been set on the specified
+ * queue.
+ *
+ * @param queue
+ * The dispatch queue to query.
+ * The result of passing NULL in this parameter is undefined.
+ *
+ * @param key
+ * The key to get the context for, typically a pointer to a static variable
+ * specific to the subsystem. Keys are only compared as pointers and never
+ * dereferenced. Passing a string constant directly is not recommended.
+ *
+ * @result
+ * The context for the specified key or NULL if no context was found.
+ */
+__OSX_AVAILABLE_STARTING(__MAC_10_7,__IPHONE_5_0)
+DISPATCH_EXPORT DISPATCH_NONNULL_ALL DISPATCH_PURE DISPATCH_WARN_RESULT
+DISPATCH_NOTHROW
+void *
+dispatch_queue_get_specific(dispatch_queue_t queue, const void *key);
+
+/*!
+ * @function dispatch_get_specific
+ *
+ * @abstract
+ * Returns the current subsystem-specific context for a key unique to the
+ * subsystem.
+ *
+ * @discussion
+ * When called from a block executing on a queue, returns the context for the
+ * specified key if it has been set on the queue, otherwise returns the result
+ * of dispatch_get_specific() executed on the queue's target queue or NULL
+ * if the current queue is a global concurrent queue.
+ *
+ * @param key
+ * The key to get the context for, typically a pointer to a static variable
+ * specific to the subsystem. Keys are only compared as pointers and never
+ * dereferenced. Passing a string constant directly is not recommended.
+ *
+ * @result
+ * The context for the specified key or NULL if no context was found.
+ */
+__OSX_AVAILABLE_STARTING(__MAC_10_7,__IPHONE_5_0)
+DISPATCH_EXPORT DISPATCH_NONNULL_ALL DISPATCH_PURE DISPATCH_WARN_RESULT
+DISPATCH_NOTHROW
+void *
+dispatch_get_specific(const void *key);
+
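A minimal sketch of tagging a queue and testing for it from a block
(subsystem_key and tag_and_check are hypothetical names):

    #include <dispatch/dispatch.h>
    #include <assert.h>

    // Tag a queue with a key whose address is unique to this subsystem, then
    // detect from a block whether it runs on that queue (or one targeting it).
    static char subsystem_key; // only the address is used

    static void
    tag_and_check(void)
    {
        dispatch_queue_t q = dispatch_queue_create("com.example.subsystem", NULL);
        dispatch_queue_set_specific(q, &subsystem_key, (void *)1, NULL);
        dispatch_async(q, ^{
            // Non-NULL because the block executes on q (or a queue targeting q).
            assert(dispatch_get_specific(&subsystem_key) != NULL);
        });
    }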
+__END_DECLS
#endif
diff --git a/dispatch/semaphore.h b/dispatch/semaphore.h
index 3e9466d..19b50af 100644
--- a/dispatch/semaphore.h
+++ b/dispatch/semaphore.h
@@ -1,20 +1,20 @@
/*
- * Copyright (c) 2008-2009 Apple Inc. All rights reserved.
+ * Copyright (c) 2008-2011 Apple Inc. All rights reserved.
*
* @APPLE_APACHE_LICENSE_HEADER_START@
- *
+ *
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
- *
+ *
* http://www.apache.org/licenses/LICENSE-2.0
- *
+ *
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
- *
+ *
* @APPLE_APACHE_LICENSE_HEADER_END@
*/
@@ -34,7 +34,7 @@
*/
DISPATCH_DECL(dispatch_semaphore);
-__DISPATCH_BEGIN_DECLS
+__BEGIN_DECLS
/*!
* @function dispatch_semaphore_create
@@ -56,7 +56,7 @@
* The newly created semaphore, or NULL on failure.
*/
__OSX_AVAILABLE_STARTING(__MAC_10_6,__IPHONE_4_0)
-DISPATCH_EXPORT DISPATCH_MALLOC DISPATCH_NOTHROW
+DISPATCH_EXPORT DISPATCH_MALLOC DISPATCH_WARN_RESULT DISPATCH_NOTHROW
dispatch_semaphore_t
dispatch_semaphore_create(long value);
@@ -107,6 +107,6 @@
long
dispatch_semaphore_signal(dispatch_semaphore_t dsema);
-__DISPATCH_END_DECLS
+__END_DECLS
#endif /* __DISPATCH_SEMAPHORE__ */
diff --git a/dispatch/source.h b/dispatch/source.h
index 8cf0ddb..4c9f601 100644
--- a/dispatch/source.h
+++ b/dispatch/source.h
@@ -1,20 +1,20 @@
/*
- * Copyright (c) 2008-2009 Apple Inc. All rights reserved.
+ * Copyright (c) 2008-2011 Apple Inc. All rights reserved.
*
* @APPLE_APACHE_LICENSE_HEADER_START@
- *
+ *
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
- *
+ *
* http://www.apache.org/licenses/LICENSE-2.0
- *
+ *
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
- *
+ *
* @APPLE_APACHE_LICENSE_HEADER_END@
*/
@@ -184,17 +184,19 @@
const struct dispatch_source_type_s _dispatch_source_type_write;
/*!
- * @enum dispatch_source_mach_send_flags_t
+ * @typedef dispatch_source_mach_send_flags_t
+ * Type of dispatch_source_mach_send flags
*
* @constant DISPATCH_MACH_SEND_DEAD
* The receive right corresponding to the given send right was destroyed.
*/
-enum {
- DISPATCH_MACH_SEND_DEAD = 0x1,
-};
+#define DISPATCH_MACH_SEND_DEAD 0x1
+
+typedef unsigned long dispatch_source_mach_send_flags_t;
/*!
- * @enum dispatch_source_proc_flags_t
+ * @typedef dispatch_source_proc_flags_t
+ * Type of dispatch_source_proc flags
*
* @constant DISPATCH_PROC_EXIT
* The process has exited (perhaps cleanly, perhaps not).
@@ -209,15 +211,16 @@
* @constant DISPATCH_PROC_SIGNAL
* A Unix signal was delivered to the process.
*/
-enum {
- DISPATCH_PROC_EXIT = 0x80000000,
- DISPATCH_PROC_FORK = 0x40000000,
- DISPATCH_PROC_EXEC = 0x20000000,
- DISPATCH_PROC_SIGNAL = 0x08000000,
-};
+#define DISPATCH_PROC_EXIT 0x80000000
+#define DISPATCH_PROC_FORK 0x40000000
+#define DISPATCH_PROC_EXEC 0x20000000
+#define DISPATCH_PROC_SIGNAL 0x08000000
+
+typedef unsigned long dispatch_source_proc_flags_t;
/*!
- * @enum dispatch_source_vnode_flags_t
+ * @typedef dispatch_source_vnode_flags_t
+ * Type of dispatch_source_vnode flags
*
* @constant DISPATCH_VNODE_DELETE
* The filesystem object was deleted from the namespace.
@@ -240,17 +243,18 @@
* @constant DISPATCH_VNODE_REVOKE
* The filesystem object was revoked.
*/
-enum {
- DISPATCH_VNODE_DELETE = 0x1,
- DISPATCH_VNODE_WRITE = 0x2,
- DISPATCH_VNODE_EXTEND = 0x4,
- DISPATCH_VNODE_ATTRIB = 0x8,
- DISPATCH_VNODE_LINK = 0x10,
- DISPATCH_VNODE_RENAME = 0x20,
- DISPATCH_VNODE_REVOKE = 0x40,
-};
-__DISPATCH_BEGIN_DECLS
+#define DISPATCH_VNODE_DELETE 0x1
+#define DISPATCH_VNODE_WRITE 0x2
+#define DISPATCH_VNODE_EXTEND 0x4
+#define DISPATCH_VNODE_ATTRIB 0x8
+#define DISPATCH_VNODE_LINK 0x10
+#define DISPATCH_VNODE_RENAME 0x20
+#define DISPATCH_VNODE_REVOKE 0x40
+
+typedef unsigned long dispatch_source_vnode_flags_t;
+
+__BEGIN_DECLS
/*!
* @function dispatch_source_create
@@ -279,10 +283,12 @@
* A mask of flags specifying which events are desired. The interpretation of
* this argument is determined by the constant provided in the type parameter.
* @param queue
- * The dispatch queue to which the event handler block will be submited.
+ * The dispatch queue to which the event handler block will be submitted.
+ * If queue is DISPATCH_TARGET_QUEUE_DEFAULT, the source will submit the event
+ * handler block to the default priority global queue.
*/
__OSX_AVAILABLE_STARTING(__MAC_10_6,__IPHONE_4_0)
-DISPATCH_EXPORT DISPATCH_MALLOC DISPATCH_NOTHROW
+DISPATCH_EXPORT DISPATCH_MALLOC DISPATCH_WARN_RESULT DISPATCH_NOTHROW
dispatch_source_t
dispatch_source_create(dispatch_source_type_t type,
uintptr_t handle,
@@ -298,7 +304,7 @@
* @param source
* The dispatch source to modify.
* The result of passing NULL in this parameter is undefined.
- *
+ *
* @param handler
* The event handler block to submit to the source's target queue.
*/
@@ -355,7 +361,7 @@
* @param source
* The dispatch source to modify.
* The result of passing NULL in this parameter is undefined.
- *
+ *
* @param handler
* The cancellation handler block to submit to the source's target queue.
*/
@@ -432,7 +438,8 @@
* Non-zero if canceled and zero if not canceled.
*/
__OSX_AVAILABLE_STARTING(__MAC_10_6,__IPHONE_4_0)
-DISPATCH_EXPORT DISPATCH_NONNULL_ALL DISPATCH_NOTHROW
+DISPATCH_EXPORT DISPATCH_NONNULL_ALL DISPATCH_WARN_RESULT DISPATCH_PURE
+DISPATCH_NOTHROW
long
dispatch_source_testcancel(dispatch_source_t source);
@@ -455,13 +462,14 @@
* DISPATCH_SOURCE_TYPE_MACH_RECV: mach port (mach_port_t)
* DISPATCH_SOURCE_TYPE_PROC: process identifier (pid_t)
* DISPATCH_SOURCE_TYPE_READ: file descriptor (int)
- * DISPATCH_SOURCE_TYPE_SIGNAL: signal number (int)
+ * DISPATCH_SOURCE_TYPE_SIGNAL: signal number (int)
* DISPATCH_SOURCE_TYPE_TIMER: n/a
* DISPATCH_SOURCE_TYPE_VNODE: file descriptor (int)
* DISPATCH_SOURCE_TYPE_WRITE: file descriptor (int)
*/
__OSX_AVAILABLE_STARTING(__MAC_10_6,__IPHONE_4_0)
-DISPATCH_EXPORT DISPATCH_NONNULL_ALL DISPATCH_WARN_RESULT DISPATCH_PURE DISPATCH_NOTHROW
+DISPATCH_EXPORT DISPATCH_NONNULL_ALL DISPATCH_WARN_RESULT DISPATCH_PURE
+DISPATCH_NOTHROW
uintptr_t
dispatch_source_get_handle(dispatch_source_t source);
@@ -490,7 +498,8 @@
* DISPATCH_SOURCE_TYPE_WRITE: n/a
*/
__OSX_AVAILABLE_STARTING(__MAC_10_6,__IPHONE_4_0)
-DISPATCH_EXPORT DISPATCH_NONNULL_ALL DISPATCH_WARN_RESULT DISPATCH_PURE DISPATCH_NOTHROW
+DISPATCH_EXPORT DISPATCH_NONNULL_ALL DISPATCH_WARN_RESULT DISPATCH_PURE
+DISPATCH_NOTHROW
unsigned long
dispatch_source_get_mask(dispatch_source_t source);
@@ -526,7 +535,8 @@
* DISPATCH_SOURCE_TYPE_WRITE: estimated buffer space available
*/
__OSX_AVAILABLE_STARTING(__MAC_10_6,__IPHONE_4_0)
-DISPATCH_EXPORT DISPATCH_NONNULL_ALL DISPATCH_WARN_RESULT DISPATCH_PURE DISPATCH_NOTHROW
+DISPATCH_EXPORT DISPATCH_NONNULL_ALL DISPATCH_WARN_RESULT DISPATCH_PURE
+DISPATCH_NOTHROW
unsigned long
dispatch_source_get_data(dispatch_source_t source);
@@ -559,14 +569,15 @@
*
* @discussion
* Calling this function has no effect if the timer source has already been
- * canceled.
- *
+ * canceled. Once this function returns, any pending timer data accumulated
+ * for the previous timer values has been cleared.
+ *
* The start time argument also determines which clock will be used for the
* timer. If the start time is DISPATCH_TIME_NOW or created with
* dispatch_time() then the timer is based on mach_absolute_time(). Otherwise,
* if the start time of the timer is created with dispatch_walltime() then the
* timer is based on gettimeofday(3).
- *
+ *
* @param start
* The start time of the timer. See dispatch_time() and dispatch_walltime()
* for more information.
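A minimal sketch of a repeating timer source using the mach_absolute_time()
based clock, per the discussion above (start_timer is a hypothetical helper):

    #include <dispatch/dispatch.h>
    #include <stdio.h>

    // A repeating one-second timer with 100 ms of leeway; the start time comes
    // from dispatch_time(), so the mach_absolute_time() clock is used.
    static dispatch_source_t
    start_timer(void)
    {
        dispatch_source_t timer = dispatch_source_create(
                DISPATCH_SOURCE_TYPE_TIMER, 0, 0,
                dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0));
        if (!timer) return NULL;
        dispatch_source_set_timer(timer,
                dispatch_time(DISPATCH_TIME_NOW, NSEC_PER_SEC),
                NSEC_PER_SEC, NSEC_PER_SEC / 10);
        dispatch_source_set_event_handler(timer, ^{
            printf("tick\n");
        });
        dispatch_resume(timer); // sources are created suspended
        return timer;
    }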
@@ -590,6 +601,59 @@
uint64_t interval,
uint64_t leeway);
-__DISPATCH_END_DECLS
+/*!
+ * @function dispatch_source_set_registration_handler
+ *
+ * @abstract
+ * Sets the registration handler block for the given dispatch source.
+ *
+ * @discussion
+ * The registration handler (if specified) will be submitted to the source's
+ * target queue once the corresponding kevent() has been registered with the
+ * system, following the initial dispatch_resume() of the source.
+ *
+ * If a source is already registered when the registration handler is set, the
+ * registration handler will be invoked immediately.
+ *
+ * @param source
+ * The dispatch source to modify.
+ * The result of passing NULL in this parameter is undefined.
+ *
+ * @param handler
+ * The registration handler block to submit to the source's target queue.
+ */
+#ifdef __BLOCKS__
+__OSX_AVAILABLE_STARTING(__MAC_10_7,__IPHONE_4_3)
+DISPATCH_EXPORT DISPATCH_NONNULL1 DISPATCH_NOTHROW
+void
+dispatch_source_set_registration_handler(dispatch_source_t source,
+ dispatch_block_t registration_handler);
+#endif /* __BLOCKS__ */
+
+/*!
+ * @function dispatch_source_set_registration_handler_f
+ *
+ * @abstract
+ * Sets the registration handler function for the given dispatch source.
+ *
+ * @discussion
+ * See dispatch_source_set_registration_handler() for more details.
+ *
+ * @param source
+ * The dispatch source to modify.
+ * The result of passing NULL in this parameter is undefined.
+ *
+ * @param handler
+ * The registration handler function to submit to the source's target queue.
+ * The context parameter passed to the registration handler function is the
+ * current context of the dispatch source at the time the handler call is made.
+ */
+__OSX_AVAILABLE_STARTING(__MAC_10_7,__IPHONE_4_3)
+DISPATCH_EXPORT DISPATCH_NONNULL1 DISPATCH_NOTHROW
+void
+dispatch_source_set_registration_handler_f(dispatch_source_t source,
+ dispatch_function_t registration_handler);
+
+__END_DECLS
#endif
diff --git a/dispatch/time.h b/dispatch/time.h
index 908f0d9..d39578d 100644
--- a/dispatch/time.h
+++ b/dispatch/time.h
@@ -1,20 +1,20 @@
/*
- * Copyright (c) 2008-2009 Apple Inc. All rights reserved.
+ * Copyright (c) 2008-2011 Apple Inc. All rights reserved.
*
* @APPLE_APACHE_LICENSE_HEADER_START@
- *
+ *
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
- *
+ *
* http://www.apache.org/licenses/LICENSE-2.0
- *
+ *
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
- *
+ *
* @APPLE_APACHE_LICENSE_HEADER_END@
*/
@@ -28,7 +28,7 @@
#include <stdint.h>
-__DISPATCH_BEGIN_DECLS
+__BEGIN_DECLS
struct timespec;
@@ -54,7 +54,7 @@
* @typedef dispatch_time_t
*
* @abstract
- * An somewhat abstract representation of time; where zero means "now" and
+ * A somewhat abstract representation of time; where zero means "now" and
* DISPATCH_TIME_FOREVER means "infinity" and every value in between is an
* opaque encoding.
*/
@@ -84,7 +84,7 @@
* A new dispatch_time_t.
*/
__OSX_AVAILABLE_STARTING(__MAC_10_6,__IPHONE_4_0)
-DISPATCH_EXPORT DISPATCH_NOTHROW
+DISPATCH_EXPORT DISPATCH_WARN_RESULT DISPATCH_NOTHROW
dispatch_time_t
dispatch_time(dispatch_time_t when, int64_t delta);
@@ -108,10 +108,10 @@
* A new dispatch_time_t.
*/
__OSX_AVAILABLE_STARTING(__MAC_10_6,__IPHONE_4_0)
-DISPATCH_EXPORT DISPATCH_NOTHROW
+DISPATCH_EXPORT DISPATCH_WARN_RESULT DISPATCH_NOTHROW
dispatch_time_t
dispatch_walltime(const struct timespec *when, int64_t delta);
-__DISPATCH_END_DECLS
+__END_DECLS
#endif
diff --git a/libdispatch.xcodeproj/project.pbxproj b/libdispatch.xcodeproj/project.pbxproj
index a000481..e36948c 100644
--- a/libdispatch.xcodeproj/project.pbxproj
+++ b/libdispatch.xcodeproj/project.pbxproj
@@ -3,12 +3,41 @@
archiveVersion = 1;
classes = {
};
- objectVersion = 45;
+ objectVersion = 46;
objects = {
+/* Begin PBXAggregateTarget section */
+ 3F3C9326128E637B0042B1F7 /* libdispatch_Sim */ = {
+ isa = PBXAggregateTarget;
+ buildConfigurationList = 3F3C9356128E637B0042B1F7 /* Build configuration list for PBXAggregateTarget "libdispatch_Sim" */;
+ buildPhases = (
+ );
+ dependencies = (
+ E4128E4A13B94BCE00ABB2CB /* PBXTargetDependency */,
+ );
+ name = libdispatch_Sim;
+ productName = libdispatch_Sim;
+ };
+ C927F35A10FD7F0600C5AB8B /* libdispatch_tools */ = {
+ isa = PBXAggregateTarget;
+ buildConfigurationList = C927F35E10FD7F0B00C5AB8B /* Build configuration list for PBXAggregateTarget "libdispatch_tools" */;
+ buildPhases = (
+ );
+ dependencies = (
+ C927F36910FD7F1A00C5AB8B /* PBXTargetDependency */,
+ );
+ name = libdispatch_tools;
+ productName = ddt;
+ };
+/* End PBXAggregateTarget section */
+
/* Begin PBXBuildFile section */
- 2EC9C9B80E8809EF00E2499A /* legacy.c in Sources */ = {isa = PBXBuildFile; fileRef = 2EC9C9B70E8809EF00E2499A /* legacy.c */; };
+ 5A0095A210F274B0000E2A31 /* io_internal.h in Headers */ = {isa = PBXBuildFile; fileRef = 5A0095A110F274B0000E2A31 /* io_internal.h */; };
+ 5A27262610F26F1900751FBC /* io.c in Sources */ = {isa = PBXBuildFile; fileRef = 5A27262510F26F1900751FBC /* io.c */; };
5A5D13AC0F6B280500197CC3 /* semaphore_internal.h in Headers */ = {isa = PBXBuildFile; fileRef = 5A5D13AB0F6B280500197CC3 /* semaphore_internal.h */; };
+ 5AAB45C010D30B79004407EA /* data.c in Sources */ = {isa = PBXBuildFile; fileRef = 5AAB45BF10D30B79004407EA /* data.c */; };
+ 5AAB45C410D30CC7004407EA /* io.h in Headers */ = {isa = PBXBuildFile; fileRef = 5AAB45C310D30CC7004407EA /* io.h */; settings = {ATTRIBUTES = (Public, ); }; };
+ 5AAB45C610D30D0C004407EA /* data.h in Headers */ = {isa = PBXBuildFile; fileRef = 5AAB45C510D30D0C004407EA /* data.h */; settings = {ATTRIBUTES = (Public, ); }; };
721F5C5D0F15520500FF03A6 /* semaphore.h in Headers */ = {isa = PBXBuildFile; fileRef = 721F5C5C0F15520500FF03A6 /* semaphore.h */; settings = {ATTRIBUTES = (Public, ); }; };
721F5CCF0F15553500FF03A6 /* semaphore.c in Sources */ = {isa = PBXBuildFile; fileRef = 721F5CCE0F15553500FF03A6 /* semaphore.c */; };
72CC94300ECCD8750031B751 /* base.h in Headers */ = {isa = PBXBuildFile; fileRef = 72CC942F0ECCD8750031B751 /* base.h */; settings = {ATTRIBUTES = (Public, ); }; };
@@ -20,16 +49,96 @@
965ECC210F3EAB71004DDD89 /* object_internal.h in Headers */ = {isa = PBXBuildFile; fileRef = 965ECC200F3EAB71004DDD89 /* object_internal.h */; };
9661E56B0F3E7DDF00749F3E /* object.c in Sources */ = {isa = PBXBuildFile; fileRef = 9661E56A0F3E7DDF00749F3E /* object.c */; };
9676A0E10F3E755D00713ADB /* apply.c in Sources */ = {isa = PBXBuildFile; fileRef = 9676A0E00F3E755D00713ADB /* apply.c */; };
- 96929D840F3EA1020041FF5D /* hw_shims.h in Headers */ = {isa = PBXBuildFile; fileRef = 96929D820F3EA1020041FF5D /* hw_shims.h */; };
- 96929D850F3EA1020041FF5D /* os_shims.h in Headers */ = {isa = PBXBuildFile; fileRef = 96929D830F3EA1020041FF5D /* os_shims.h */; };
+ 96929D840F3EA1020041FF5D /* atomic.h in Headers */ = {isa = PBXBuildFile; fileRef = 96929D820F3EA1020041FF5D /* atomic.h */; };
+ 96929D850F3EA1020041FF5D /* shims.h in Headers */ = {isa = PBXBuildFile; fileRef = 96929D830F3EA1020041FF5D /* shims.h */; };
96929D960F3EA2170041FF5D /* queue_internal.h in Headers */ = {isa = PBXBuildFile; fileRef = 96929D950F3EA2170041FF5D /* queue_internal.h */; };
96A8AA870F41E7A400CD570B /* source.c in Sources */ = {isa = PBXBuildFile; fileRef = 96A8AA860F41E7A400CD570B /* source.c */; };
96BC39BD0F3EBAB100C59689 /* queue_private.h in Headers */ = {isa = PBXBuildFile; fileRef = 96BC39BC0F3EBAB100C59689 /* queue_private.h */; settings = {ATTRIBUTES = (Private, ); }; };
96C9553B0F3EAEDD000D2CA4 /* once.h in Headers */ = {isa = PBXBuildFile; fileRef = 96C9553A0F3EAEDD000D2CA4 /* once.h */; settings = {ATTRIBUTES = (Public, ); }; };
96DF70BE0F38FE3C0074BD99 /* once.c in Sources */ = {isa = PBXBuildFile; fileRef = 96DF70BD0F38FE3C0074BD99 /* once.c */; };
- E4BF990110A89607007655D0 /* time.c in Sources */ = {isa = PBXBuildFile; fileRef = E4BF990010A89607007655D0 /* time.c */; };
+ E4128ED613BA9A1700ABB2CB /* hw_config.h in Headers */ = {isa = PBXBuildFile; fileRef = E4128ED513BA9A1700ABB2CB /* hw_config.h */; };
+ E4128ED713BA9A1700ABB2CB /* hw_config.h in Headers */ = {isa = PBXBuildFile; fileRef = E4128ED513BA9A1700ABB2CB /* hw_config.h */; };
+ E417A38412A472C4004D659D /* provider.d in Sources */ = {isa = PBXBuildFile; fileRef = E43570B8126E93380097AB9F /* provider.d */; };
+ E417A38512A472C5004D659D /* provider.d in Sources */ = {isa = PBXBuildFile; fileRef = E43570B8126E93380097AB9F /* provider.d */; };
+ E422A0D512A557B5005E5BDB /* trace.h in Headers */ = {isa = PBXBuildFile; fileRef = E422A0D412A557B5005E5BDB /* trace.h */; };
+ E422A0D612A557B5005E5BDB /* trace.h in Headers */ = {isa = PBXBuildFile; fileRef = E422A0D412A557B5005E5BDB /* trace.h */; };
+ E43570B9126E93380097AB9F /* provider.d in Sources */ = {isa = PBXBuildFile; fileRef = E43570B8126E93380097AB9F /* provider.d */; };
+ E43570BA126E93380097AB9F /* provider.d in Sources */ = {isa = PBXBuildFile; fileRef = E43570B8126E93380097AB9F /* provider.d */; };
+ E44EBE3E1251659900645D88 /* init.c in Sources */ = {isa = PBXBuildFile; fileRef = E44EBE3B1251659900645D88 /* init.c */; };
+ E44EBE5412517EBE00645D88 /* protocol.defs in Sources */ = {isa = PBXBuildFile; fileRef = FC7BED950E8361E600161930 /* protocol.defs */; settings = {ATTRIBUTES = (Client, Server, ); }; };
+ E44EBE5512517EBE00645D88 /* init.c in Sources */ = {isa = PBXBuildFile; fileRef = E44EBE3B1251659900645D88 /* init.c */; };
+ E44EBE5612517EBE00645D88 /* protocol.defs in Sources */ = {isa = PBXBuildFile; fileRef = FC7BED950E8361E600161930 /* protocol.defs */; settings = {ATTRIBUTES = (Client, Server, ); }; };
+ E44EBE5712517EBE00645D88 /* init.c in Sources */ = {isa = PBXBuildFile; fileRef = E44EBE3B1251659900645D88 /* init.c */; };
+ E49F2423125D3C960057C971 /* resolver.c in Sources */ = {isa = PBXBuildFile; fileRef = E44EBE371251656400645D88 /* resolver.c */; };
+ E49F2424125D3C970057C971 /* resolver.c in Sources */ = {isa = PBXBuildFile; fileRef = E44EBE371251656400645D88 /* resolver.c */; };
+ E49F2499125D48D80057C971 /* resolver.c in Sources */ = {isa = PBXBuildFile; fileRef = E44EBE371251656400645D88 /* resolver.c */; };
+ E49F24AB125D57FA0057C971 /* dispatch.h in Headers */ = {isa = PBXBuildFile; fileRef = FC7BED960E8361E600161930 /* dispatch.h */; settings = {ATTRIBUTES = (Public, ); }; };
+ E49F24AC125D57FA0057C971 /* base.h in Headers */ = {isa = PBXBuildFile; fileRef = 72CC942F0ECCD8750031B751 /* base.h */; settings = {ATTRIBUTES = (Public, ); }; };
+ E49F24AD125D57FA0057C971 /* object.h in Headers */ = {isa = PBXBuildFile; fileRef = 961B994F0F3E85C30006BC96 /* object.h */; settings = {ATTRIBUTES = (Public, ); }; };
+ E49F24AE125D57FA0057C971 /* queue.h in Headers */ = {isa = PBXBuildFile; fileRef = FC7BED8B0E8361E600161930 /* queue.h */; settings = {ATTRIBUTES = (Public, ); }; };
+ E49F24AF125D57FA0057C971 /* source.h in Headers */ = {isa = PBXBuildFile; fileRef = FC7BED8D0E8361E600161930 /* source.h */; settings = {ATTRIBUTES = (Public, ); }; };
+ E49F24B0125D57FA0057C971 /* semaphore.h in Headers */ = {isa = PBXBuildFile; fileRef = 721F5C5C0F15520500FF03A6 /* semaphore.h */; settings = {ATTRIBUTES = (Public, ); }; };
+ E49F24B1125D57FA0057C971 /* group.h in Headers */ = {isa = PBXBuildFile; fileRef = FC5C9C1D0EADABE3006E462D /* group.h */; settings = {ATTRIBUTES = (Public, ); }; };
+ E49F24B2125D57FA0057C971 /* once.h in Headers */ = {isa = PBXBuildFile; fileRef = 96C9553A0F3EAEDD000D2CA4 /* once.h */; settings = {ATTRIBUTES = (Public, ); }; };
+ E49F24B3125D57FA0057C971 /* io.h in Headers */ = {isa = PBXBuildFile; fileRef = 5AAB45C310D30CC7004407EA /* io.h */; settings = {ATTRIBUTES = (Public, ); }; };
+ E49F24B4125D57FA0057C971 /* data.h in Headers */ = {isa = PBXBuildFile; fileRef = 5AAB45C510D30D0C004407EA /* data.h */; settings = {ATTRIBUTES = (Public, ); }; };
+ E49F24B5125D57FA0057C971 /* time.h in Headers */ = {isa = PBXBuildFile; fileRef = 96032E4C0F5CC8D100241C5F /* time.h */; settings = {ATTRIBUTES = (Public, ); }; };
+ E49F24B6125D57FA0057C971 /* private.h in Headers */ = {isa = PBXBuildFile; fileRef = FC7BED930E8361E600161930 /* private.h */; settings = {ATTRIBUTES = (Private, ); }; };
+ E49F24B7125D57FA0057C971 /* queue_private.h in Headers */ = {isa = PBXBuildFile; fileRef = 96BC39BC0F3EBAB100C59689 /* queue_private.h */; settings = {ATTRIBUTES = (Private, ); }; };
+ E49F24B8125D57FA0057C971 /* source_private.h in Headers */ = {isa = PBXBuildFile; fileRef = FCEF047F0F5661960067401F /* source_private.h */; settings = {ATTRIBUTES = (Private, ); }; };
+ E49F24B9125D57FA0057C971 /* benchmark.h in Headers */ = {isa = PBXBuildFile; fileRef = 961B99350F3E83980006BC96 /* benchmark.h */; settings = {ATTRIBUTES = (Private, ); }; };
+ E49F24BA125D57FA0057C971 /* internal.h in Headers */ = {isa = PBXBuildFile; fileRef = FC7BED8F0E8361E600161930 /* internal.h */; settings = {ATTRIBUTES = (); }; };
+ E49F24BB125D57FA0057C971 /* queue_internal.h in Headers */ = {isa = PBXBuildFile; fileRef = 96929D950F3EA2170041FF5D /* queue_internal.h */; };
+ E49F24BC125D57FA0057C971 /* object_internal.h in Headers */ = {isa = PBXBuildFile; fileRef = 965ECC200F3EAB71004DDD89 /* object_internal.h */; };
+ E49F24BD125D57FA0057C971 /* semaphore_internal.h in Headers */ = {isa = PBXBuildFile; fileRef = 5A5D13AB0F6B280500197CC3 /* semaphore_internal.h */; };
+ E49F24BE125D57FA0057C971 /* source_internal.h in Headers */ = {isa = PBXBuildFile; fileRef = FC0B34780FA2851C0080FFA0 /* source_internal.h */; };
+ E49F24BF125D57FA0057C971 /* io_internal.h in Headers */ = {isa = PBXBuildFile; fileRef = 5A0095A110F274B0000E2A31 /* io_internal.h */; };
+ E49F24C1125D57FA0057C971 /* tsd.h in Headers */ = {isa = PBXBuildFile; fileRef = FC1832A4109923C7003403D5 /* tsd.h */; };
+ E49F24C2125D57FA0057C971 /* atomic.h in Headers */ = {isa = PBXBuildFile; fileRef = 96929D820F3EA1020041FF5D /* atomic.h */; };
+ E49F24C3125D57FA0057C971 /* shims.h in Headers */ = {isa = PBXBuildFile; fileRef = 96929D830F3EA1020041FF5D /* shims.h */; };
+ E49F24C4125D57FA0057C971 /* time.h in Headers */ = {isa = PBXBuildFile; fileRef = FC1832A3109923C7003403D5 /* time.h */; };
+ E49F24C5125D57FA0057C971 /* perfmon.h in Headers */ = {isa = PBXBuildFile; fileRef = FC1832A2109923C7003403D5 /* perfmon.h */; };
+ E49F24C6125D57FA0057C971 /* config.h in Headers */ = {isa = PBXBuildFile; fileRef = FC9C70E7105EC9620074F9CA /* config.h */; };
+ E49F24C8125D57FA0057C971 /* protocol.defs in Sources */ = {isa = PBXBuildFile; fileRef = FC7BED950E8361E600161930 /* protocol.defs */; settings = {ATTRIBUTES = (Client, Server, ); }; };
+ E49F24C9125D57FA0057C971 /* resolver.c in Sources */ = {isa = PBXBuildFile; fileRef = E44EBE371251656400645D88 /* resolver.c */; };
+ E49F24CA125D57FA0057C971 /* init.c in Sources */ = {isa = PBXBuildFile; fileRef = E44EBE3B1251659900645D88 /* init.c */; };
+ E49F24CB125D57FA0057C971 /* queue.c in Sources */ = {isa = PBXBuildFile; fileRef = FC7BED8A0E8361E600161930 /* queue.c */; };
+ E49F24CC125D57FA0057C971 /* semaphore.c in Sources */ = {isa = PBXBuildFile; fileRef = 721F5CCE0F15553500FF03A6 /* semaphore.c */; };
+ E49F24CD125D57FA0057C971 /* once.c in Sources */ = {isa = PBXBuildFile; fileRef = 96DF70BD0F38FE3C0074BD99 /* once.c */; };
+ E49F24CE125D57FA0057C971 /* apply.c in Sources */ = {isa = PBXBuildFile; fileRef = 9676A0E00F3E755D00713ADB /* apply.c */; };
+ E49F24CF125D57FA0057C971 /* object.c in Sources */ = {isa = PBXBuildFile; fileRef = 9661E56A0F3E7DDF00749F3E /* object.c */; };
+ E49F24D0125D57FA0057C971 /* benchmark.c in Sources */ = {isa = PBXBuildFile; fileRef = 965CD6340F3E806200D4E28D /* benchmark.c */; };
+ E49F24D1125D57FA0057C971 /* source.c in Sources */ = {isa = PBXBuildFile; fileRef = 96A8AA860F41E7A400CD570B /* source.c */; };
+ E49F24D2125D57FA0057C971 /* time.c in Sources */ = {isa = PBXBuildFile; fileRef = 96032E4A0F5CC8C700241C5F /* time.c */; };
+ E49F24D3125D57FA0057C971 /* data.c in Sources */ = {isa = PBXBuildFile; fileRef = 5AAB45BF10D30B79004407EA /* data.c */; };
+ E49F24D4125D57FA0057C971 /* io.c in Sources */ = {isa = PBXBuildFile; fileRef = 5A27262510F26F1900751FBC /* io.c */; };
+ E4BA743B13A8911B0095BDF1 /* getprogname.h in Headers */ = {isa = PBXBuildFile; fileRef = E4BA743913A8911B0095BDF1 /* getprogname.h */; };
+ E4BA743C13A8911B0095BDF1 /* getprogname.h in Headers */ = {isa = PBXBuildFile; fileRef = E4BA743913A8911B0095BDF1 /* getprogname.h */; };
+ E4BA743F13A8911B0095BDF1 /* malloc_zone.h in Headers */ = {isa = PBXBuildFile; fileRef = E4BA743A13A8911B0095BDF1 /* malloc_zone.h */; };
+ E4BA744013A8911B0095BDF1 /* malloc_zone.h in Headers */ = {isa = PBXBuildFile; fileRef = E4BA743A13A8911B0095BDF1 /* malloc_zone.h */; };
+ E4C1ED6F1263E714000D3C8B /* data_internal.h in Headers */ = {isa = PBXBuildFile; fileRef = E4C1ED6E1263E714000D3C8B /* data_internal.h */; };
+ E4C1ED701263E714000D3C8B /* data_internal.h in Headers */ = {isa = PBXBuildFile; fileRef = E4C1ED6E1263E714000D3C8B /* data_internal.h */; };
+ E4EC11AE12514302000DDBD1 /* queue.c in Sources */ = {isa = PBXBuildFile; fileRef = FC7BED8A0E8361E600161930 /* queue.c */; };
+ E4EC11AF12514302000DDBD1 /* semaphore.c in Sources */ = {isa = PBXBuildFile; fileRef = 721F5CCE0F15553500FF03A6 /* semaphore.c */; };
+ E4EC11B012514302000DDBD1 /* once.c in Sources */ = {isa = PBXBuildFile; fileRef = 96DF70BD0F38FE3C0074BD99 /* once.c */; };
+ E4EC11B112514302000DDBD1 /* apply.c in Sources */ = {isa = PBXBuildFile; fileRef = 9676A0E00F3E755D00713ADB /* apply.c */; };
+ E4EC11B212514302000DDBD1 /* object.c in Sources */ = {isa = PBXBuildFile; fileRef = 9661E56A0F3E7DDF00749F3E /* object.c */; };
+ E4EC11B312514302000DDBD1 /* benchmark.c in Sources */ = {isa = PBXBuildFile; fileRef = 965CD6340F3E806200D4E28D /* benchmark.c */; };
+ E4EC11B412514302000DDBD1 /* source.c in Sources */ = {isa = PBXBuildFile; fileRef = 96A8AA860F41E7A400CD570B /* source.c */; };
+ E4EC11B512514302000DDBD1 /* time.c in Sources */ = {isa = PBXBuildFile; fileRef = 96032E4A0F5CC8C700241C5F /* time.c */; };
+ E4EC11B712514302000DDBD1 /* data.c in Sources */ = {isa = PBXBuildFile; fileRef = 5AAB45BF10D30B79004407EA /* data.c */; };
+ E4EC11B812514302000DDBD1 /* io.c in Sources */ = {isa = PBXBuildFile; fileRef = 5A27262510F26F1900751FBC /* io.c */; };
+ E4EC121A12514715000DDBD1 /* queue.c in Sources */ = {isa = PBXBuildFile; fileRef = FC7BED8A0E8361E600161930 /* queue.c */; };
+ E4EC121B12514715000DDBD1 /* semaphore.c in Sources */ = {isa = PBXBuildFile; fileRef = 721F5CCE0F15553500FF03A6 /* semaphore.c */; };
+ E4EC121C12514715000DDBD1 /* once.c in Sources */ = {isa = PBXBuildFile; fileRef = 96DF70BD0F38FE3C0074BD99 /* once.c */; };
+ E4EC121D12514715000DDBD1 /* apply.c in Sources */ = {isa = PBXBuildFile; fileRef = 9676A0E00F3E755D00713ADB /* apply.c */; };
+ E4EC121E12514715000DDBD1 /* object.c in Sources */ = {isa = PBXBuildFile; fileRef = 9661E56A0F3E7DDF00749F3E /* object.c */; };
+ E4EC121F12514715000DDBD1 /* benchmark.c in Sources */ = {isa = PBXBuildFile; fileRef = 965CD6340F3E806200D4E28D /* benchmark.c */; };
+ E4EC122012514715000DDBD1 /* source.c in Sources */ = {isa = PBXBuildFile; fileRef = 96A8AA860F41E7A400CD570B /* source.c */; };
+ E4EC122112514715000DDBD1 /* time.c in Sources */ = {isa = PBXBuildFile; fileRef = 96032E4A0F5CC8C700241C5F /* time.c */; };
+ E4EC122312514715000DDBD1 /* data.c in Sources */ = {isa = PBXBuildFile; fileRef = 5AAB45BF10D30B79004407EA /* data.c */; };
+ E4EC122412514715000DDBD1 /* io.c in Sources */ = {isa = PBXBuildFile; fileRef = 5A27262510F26F1900751FBC /* io.c */; };
FC0B34790FA2851C0080FFA0 /* source_internal.h in Headers */ = {isa = PBXBuildFile; fileRef = FC0B34780FA2851C0080FFA0 /* source_internal.h */; };
- FC18329F109923A7003403D5 /* mach.c in Sources */ = {isa = PBXBuildFile; fileRef = FC18329E109923A7003403D5 /* mach.c */; };
FC1832A6109923C7003403D5 /* perfmon.h in Headers */ = {isa = PBXBuildFile; fileRef = FC1832A2109923C7003403D5 /* perfmon.h */; };
FC1832A7109923C7003403D5 /* time.h in Headers */ = {isa = PBXBuildFile; fileRef = FC1832A3109923C7003403D5 /* time.h */; };
FC1832A8109923C7003403D5 /* tsd.h in Headers */ = {isa = PBXBuildFile; fileRef = FC1832A4109923C7003403D5 /* tsd.h */; };
@@ -38,7 +147,6 @@
FC7BED9A0E8361E600161930 /* queue.h in Headers */ = {isa = PBXBuildFile; fileRef = FC7BED8B0E8361E600161930 /* queue.h */; settings = {ATTRIBUTES = (Public, ); }; };
FC7BED9C0E8361E600161930 /* source.h in Headers */ = {isa = PBXBuildFile; fileRef = FC7BED8D0E8361E600161930 /* source.h */; settings = {ATTRIBUTES = (Public, ); }; };
FC7BED9E0E8361E600161930 /* internal.h in Headers */ = {isa = PBXBuildFile; fileRef = FC7BED8F0E8361E600161930 /* internal.h */; settings = {ATTRIBUTES = (); }; };
- FC7BED9F0E8361E600161930 /* legacy.h in Headers */ = {isa = PBXBuildFile; fileRef = FC7BED900E8361E600161930 /* legacy.h */; settings = {ATTRIBUTES = (Private, ); }; };
FC7BEDA20E8361E600161930 /* private.h in Headers */ = {isa = PBXBuildFile; fileRef = FC7BED930E8361E600161930 /* private.h */; settings = {ATTRIBUTES = (Private, ); }; };
FC7BEDA40E8361E600161930 /* protocol.defs in Sources */ = {isa = PBXBuildFile; fileRef = FC7BED950E8361E600161930 /* protocol.defs */; settings = {ATTRIBUTES = (Client, Server, ); }; };
FC7BEDA50E8361E600161930 /* dispatch.h in Headers */ = {isa = PBXBuildFile; fileRef = FC7BED960E8361E600161930 /* dispatch.h */; settings = {ATTRIBUTES = (Public, ); }; };
@@ -46,58 +154,122 @@
FCEF04800F5661960067401F /* source_private.h in Headers */ = {isa = PBXBuildFile; fileRef = FCEF047F0F5661960067401F /* source_private.h */; settings = {ATTRIBUTES = (Private, ); }; };
/* End PBXBuildFile section */
+/* Begin PBXContainerItemProxy section */
+ C927F36610FD7F1000C5AB8B /* PBXContainerItemProxy */ = {
+ isa = PBXContainerItemProxy;
+ containerPortal = C927F35F10FD7F1000C5AB8B /* ddt.xcodeproj */;
+ proxyType = 2;
+ remoteGlobalIDString = FCFA5AA010D1AE050074F59A;
+ remoteInfo = ddt;
+ };
+ C927F36810FD7F1A00C5AB8B /* PBXContainerItemProxy */ = {
+ isa = PBXContainerItemProxy;
+ containerPortal = C927F35F10FD7F1000C5AB8B /* ddt.xcodeproj */;
+ proxyType = 1;
+ remoteGlobalIDString = FCFA5A9F10D1AE050074F59A;
+ remoteInfo = ddt;
+ };
+ E4128E4913B94BCE00ABB2CB /* PBXContainerItemProxy */ = {
+ isa = PBXContainerItemProxy;
+ containerPortal = 08FB7793FE84155DC02AAC07 /* Project object */;
+ proxyType = 1;
+ remoteGlobalIDString = D2AAC045055464E500DB518D;
+ remoteInfo = libdispatch;
+ };
+ E47D6ECA125FEB9D0070D91C /* PBXContainerItemProxy */ = {
+ isa = PBXContainerItemProxy;
+ containerPortal = 08FB7793FE84155DC02AAC07 /* Project object */;
+ proxyType = 1;
+ remoteGlobalIDString = E4EC118F12514302000DDBD1;
+ remoteInfo = "libdispatch up resolved";
+ };
+ E47D6ECC125FEBA10070D91C /* PBXContainerItemProxy */ = {
+ isa = PBXContainerItemProxy;
+ containerPortal = 08FB7793FE84155DC02AAC07 /* Project object */;
+ proxyType = 1;
+ remoteGlobalIDString = E4EC121612514715000DDBD1;
+ remoteInfo = "libdispatch mp resolved";
+ };
+/* End PBXContainerItemProxy section */
+
/* Begin PBXFileReference section */
- 2EC9C9B70E8809EF00E2499A /* legacy.c */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.c; name = legacy.c; path = src/legacy.c; sourceTree = "<group>"; };
- 5A5D13AB0F6B280500197CC3 /* semaphore_internal.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; name = semaphore_internal.h; path = src/semaphore_internal.h; sourceTree = "<group>"; };
- 721F5C5C0F15520500FF03A6 /* semaphore.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; name = semaphore.h; path = dispatch/semaphore.h; sourceTree = "<group>"; };
- 721F5CCE0F15553500FF03A6 /* semaphore.c */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.c; name = semaphore.c; path = src/semaphore.c; sourceTree = "<group>"; };
- 72B54F690EB169EB00DBECBA /* dispatch_source_create.3 */ = {isa = PBXFileReference; explicitFileType = text.man; fileEncoding = 4; name = dispatch_source_create.3; path = man/dispatch_source_create.3; sourceTree = "<group>"; };
- 72CC940C0ECCD5720031B751 /* dispatch_object.3 */ = {isa = PBXFileReference; explicitFileType = text.man; fileEncoding = 4; name = dispatch_object.3; path = man/dispatch_object.3; sourceTree = "<group>"; };
- 72CC940D0ECCD5720031B751 /* dispatch.3 */ = {isa = PBXFileReference; explicitFileType = text.man; fileEncoding = 4; name = dispatch.3; path = man/dispatch.3; sourceTree = "<group>"; };
- 72CC942F0ECCD8750031B751 /* base.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; name = base.h; path = dispatch/base.h; sourceTree = "<group>"; };
- 96032E4A0F5CC8C700241C5F /* time.c */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.c; name = time.c; path = src/time.c; sourceTree = "<group>"; };
- 96032E4C0F5CC8D100241C5F /* time.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; name = time.h; path = dispatch/time.h; sourceTree = "<group>"; };
- 960F0E7D0F3FB232000D88BF /* dispatch_apply.3 */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = text; name = dispatch_apply.3; path = man/dispatch_apply.3; sourceTree = "<group>"; };
- 960F0E7E0F3FB232000D88BF /* dispatch_once.3 */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = text; name = dispatch_once.3; path = man/dispatch_once.3; sourceTree = "<group>"; };
- 961B99350F3E83980006BC96 /* benchmark.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; name = benchmark.h; path = dispatch/benchmark.h; sourceTree = "<group>"; };
- 961B994F0F3E85C30006BC96 /* object.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; name = object.h; path = dispatch/object.h; sourceTree = "<group>"; };
- 963FDDE50F3FB6BD00BF2D00 /* dispatch_semaphore_create.3 */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = text; name = dispatch_semaphore_create.3; path = man/dispatch_semaphore_create.3; sourceTree = "<group>"; };
- 965CD6340F3E806200D4E28D /* benchmark.c */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.c; name = benchmark.c; path = src/benchmark.c; sourceTree = "<group>"; };
- 965ECC200F3EAB71004DDD89 /* object_internal.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; name = object_internal.h; path = src/object_internal.h; sourceTree = "<group>"; };
- 9661E56A0F3E7DDF00749F3E /* object.c */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.c; name = object.c; path = src/object.c; sourceTree = "<group>"; };
- 9676A0E00F3E755D00713ADB /* apply.c */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.c; name = apply.c; path = src/apply.c; sourceTree = "<group>"; };
- 96859A3D0EF71BAD003EB3FB /* dispatch_benchmark.3 */ = {isa = PBXFileReference; explicitFileType = text.man; fileEncoding = 4; name = dispatch_benchmark.3; path = man/dispatch_benchmark.3; sourceTree = "<group>"; };
- 96929D820F3EA1020041FF5D /* hw_shims.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; name = hw_shims.h; path = src/hw_shims.h; sourceTree = "<group>"; };
- 96929D830F3EA1020041FF5D /* os_shims.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; name = os_shims.h; path = src/os_shims.h; sourceTree = "<group>"; };
- 96929D950F3EA2170041FF5D /* queue_internal.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; name = queue_internal.h; path = src/queue_internal.h; sourceTree = "<group>"; };
- 96A8AA860F41E7A400CD570B /* source.c */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.c; name = source.c; path = src/source.c; sourceTree = "<group>"; };
- 96BC39BC0F3EBAB100C59689 /* queue_private.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; name = queue_private.h; path = src/queue_private.h; sourceTree = "<group>"; };
- 96C9553A0F3EAEDD000D2CA4 /* once.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; name = once.h; path = dispatch/once.h; sourceTree = "<group>"; };
- 96DF70BD0F38FE3C0074BD99 /* once.c */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.c; name = once.c; path = src/once.c; sourceTree = "<group>"; };
- D2AAC046055464E500DB518D /* libdispatch.a */ = {isa = PBXFileReference; explicitFileType = archive.ar; includeInIndex = 0; path = libdispatch.a; sourceTree = BUILT_PRODUCTS_DIR; };
- E4BF990010A89607007655D0 /* time.c */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.c; name = time.c; path = src/shims/time.c; sourceTree = "<group>"; };
- FC0B34780FA2851C0080FFA0 /* source_internal.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; name = source_internal.h; path = src/source_internal.h; sourceTree = "<group>"; };
- FC18329E109923A7003403D5 /* mach.c */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.c; name = mach.c; path = src/shims/mach.c; sourceTree = "<group>"; };
- FC1832A2109923C7003403D5 /* perfmon.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; name = perfmon.h; path = src/shims/perfmon.h; sourceTree = "<group>"; };
- FC1832A3109923C7003403D5 /* time.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; name = time.h; path = src/shims/time.h; sourceTree = "<group>"; };
- FC1832A4109923C7003403D5 /* tsd.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; name = tsd.h; path = src/shims/tsd.h; sourceTree = "<group>"; };
- FC36279C0E933ED80054F1A3 /* dispatch_queue_create.3 */ = {isa = PBXFileReference; explicitFileType = text.man; fileEncoding = 4; name = dispatch_queue_create.3; path = man/dispatch_queue_create.3; sourceTree = "<group>"; };
- FC5C9C1D0EADABE3006E462D /* group.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; name = group.h; path = dispatch/group.h; sourceTree = "<group>"; };
- FC678DE80F97E0C300AB5993 /* dispatch_after.3 */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = text; name = dispatch_after.3; path = man/dispatch_after.3; sourceTree = "<group>"; };
- FC678DE90F97E0C300AB5993 /* dispatch_api.3 */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = text; name = dispatch_api.3; path = man/dispatch_api.3; sourceTree = "<group>"; };
- FC678DEA0F97E0C300AB5993 /* dispatch_async.3 */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = text; name = dispatch_async.3; path = man/dispatch_async.3; sourceTree = "<group>"; };
- FC678DEB0F97E0C300AB5993 /* dispatch_group_create.3 */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = text; name = dispatch_group_create.3; path = man/dispatch_group_create.3; sourceTree = "<group>"; };
- FC678DEC0F97E0C300AB5993 /* dispatch_time.3 */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = text; name = dispatch_time.3; path = man/dispatch_time.3; sourceTree = "<group>"; };
- FC7BED8A0E8361E600161930 /* queue.c */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.c; name = queue.c; path = src/queue.c; sourceTree = "<group>"; };
- FC7BED8B0E8361E600161930 /* queue.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; name = queue.h; path = dispatch/queue.h; sourceTree = "<group>"; };
- FC7BED8D0E8361E600161930 /* source.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; name = source.h; path = dispatch/source.h; sourceTree = "<group>"; };
- FC7BED8F0E8361E600161930 /* internal.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; name = internal.h; path = src/internal.h; sourceTree = "<group>"; };
- FC7BED900E8361E600161930 /* legacy.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; name = legacy.h; path = src/legacy.h; sourceTree = "<group>"; };
- FC7BED930E8361E600161930 /* private.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; name = private.h; path = src/private.h; sourceTree = "<group>"; };
- FC7BED950E8361E600161930 /* protocol.defs */ = {isa = PBXFileReference; explicitFileType = sourcecode.mig; fileEncoding = 4; name = protocol.defs; path = src/protocol.defs; sourceTree = "<group>"; };
- FC7BED960E8361E600161930 /* dispatch.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; name = dispatch.h; path = dispatch/dispatch.h; sourceTree = "<group>"; };
- FC9C70E7105EC9620074F9CA /* config.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; name = config.h; path = config/config.h; sourceTree = "<group>"; };
- FCEF047F0F5661960067401F /* source_private.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; name = source_private.h; path = src/source_private.h; sourceTree = "<group>"; };
+ 5A0095A110F274B0000E2A31 /* io_internal.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = io_internal.h; sourceTree = "<group>"; };
+ 5A27262510F26F1900751FBC /* io.c */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.c; lineEnding = 0; path = io.c; sourceTree = "<group>"; xcLanguageSpecificationIdentifier = xcode.lang.c; };
+ 5A5D13AB0F6B280500197CC3 /* semaphore_internal.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = semaphore_internal.h; sourceTree = "<group>"; };
+ 5AAB45BF10D30B79004407EA /* data.c */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.c; lineEnding = 0; path = data.c; sourceTree = "<group>"; xcLanguageSpecificationIdentifier = xcode.lang.c; };
+ 5AAB45C310D30CC7004407EA /* io.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = io.h; sourceTree = "<group>"; tabWidth = 8; };
+ 5AAB45C510D30D0C004407EA /* data.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = data.h; sourceTree = "<group>"; tabWidth = 8; };
+ 721F5C5C0F15520500FF03A6 /* semaphore.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = semaphore.h; sourceTree = "<group>"; };
+ 721F5CCE0F15553500FF03A6 /* semaphore.c */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.c; lineEnding = 0; path = semaphore.c; sourceTree = "<group>"; xcLanguageSpecificationIdentifier = xcode.lang.c; };
+ 72B54F690EB169EB00DBECBA /* dispatch_source_create.3 */ = {isa = PBXFileReference; explicitFileType = text.man; fileEncoding = 4; path = dispatch_source_create.3; sourceTree = "<group>"; };
+ 72CC940C0ECCD5720031B751 /* dispatch_object.3 */ = {isa = PBXFileReference; explicitFileType = text.man; fileEncoding = 4; path = dispatch_object.3; sourceTree = "<group>"; };
+ 72CC940D0ECCD5720031B751 /* dispatch.3 */ = {isa = PBXFileReference; explicitFileType = text.man; fileEncoding = 4; path = dispatch.3; sourceTree = "<group>"; };
+ 72CC942F0ECCD8750031B751 /* base.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = base.h; sourceTree = "<group>"; };
+ 96032E4A0F5CC8C700241C5F /* time.c */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.c; path = time.c; sourceTree = "<group>"; };
+ 96032E4C0F5CC8D100241C5F /* time.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = time.h; sourceTree = "<group>"; };
+ 960F0E7D0F3FB232000D88BF /* dispatch_apply.3 */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = text; path = dispatch_apply.3; sourceTree = "<group>"; };
+ 960F0E7E0F3FB232000D88BF /* dispatch_once.3 */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = text; path = dispatch_once.3; sourceTree = "<group>"; };
+ 961B99350F3E83980006BC96 /* benchmark.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = benchmark.h; sourceTree = "<group>"; };
+ 961B994F0F3E85C30006BC96 /* object.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = object.h; sourceTree = "<group>"; };
+ 963FDDE50F3FB6BD00BF2D00 /* dispatch_semaphore_create.3 */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = text; path = dispatch_semaphore_create.3; sourceTree = "<group>"; };
+ 965CD6340F3E806200D4E28D /* benchmark.c */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.c; path = benchmark.c; sourceTree = "<group>"; };
+ 965ECC200F3EAB71004DDD89 /* object_internal.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = object_internal.h; sourceTree = "<group>"; };
+ 9661E56A0F3E7DDF00749F3E /* object.c */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.c; lineEnding = 0; path = object.c; sourceTree = "<group>"; xcLanguageSpecificationIdentifier = xcode.lang.c; };
+ 9676A0E00F3E755D00713ADB /* apply.c */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.c; lineEnding = 0; path = apply.c; sourceTree = "<group>"; xcLanguageSpecificationIdentifier = xcode.lang.c; };
+ 96859A3D0EF71BAD003EB3FB /* dispatch_benchmark.3 */ = {isa = PBXFileReference; explicitFileType = text.man; fileEncoding = 4; path = dispatch_benchmark.3; sourceTree = "<group>"; };
+ 96929D820F3EA1020041FF5D /* atomic.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; lineEnding = 0; path = atomic.h; sourceTree = "<group>"; xcLanguageSpecificationIdentifier = xcode.lang.objcpp; };
+ 96929D830F3EA1020041FF5D /* shims.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = shims.h; sourceTree = "<group>"; };
+ 96929D950F3EA2170041FF5D /* queue_internal.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; lineEnding = 0; path = queue_internal.h; sourceTree = "<group>"; xcLanguageSpecificationIdentifier = xcode.lang.objcpp; };
+ 96A8AA860F41E7A400CD570B /* source.c */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.c; lineEnding = 0; path = source.c; sourceTree = "<group>"; xcLanguageSpecificationIdentifier = xcode.lang.c; };
+ 96BC39BC0F3EBAB100C59689 /* queue_private.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = queue_private.h; sourceTree = "<group>"; };
+ 96C9553A0F3EAEDD000D2CA4 /* once.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = once.h; sourceTree = "<group>"; };
+ 96DF70BD0F38FE3C0074BD99 /* once.c */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.c; lineEnding = 0; path = once.c; sourceTree = "<group>"; xcLanguageSpecificationIdentifier = xcode.lang.c; };
+ C927F35F10FD7F1000C5AB8B /* ddt.xcodeproj */ = {isa = PBXFileReference; lastKnownFileType = "wrapper.pb-project"; name = ddt.xcodeproj; path = tools/ddt/ddt.xcodeproj; sourceTree = "<group>"; };
+ D2AAC046055464E500DB518D /* libdispatch.dylib */ = {isa = PBXFileReference; explicitFileType = "compiled.mach-o.dylib"; includeInIndex = 0; path = libdispatch.dylib; sourceTree = BUILT_PRODUCTS_DIR; };
+ E40041A9125D70590022B135 /* libdispatch-resolved.xcconfig */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = text.xcconfig; path = "libdispatch-resolved.xcconfig"; sourceTree = "<group>"; };
+ E40041AA125D705F0022B135 /* libdispatch-resolver.xcconfig */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = text.xcconfig; path = "libdispatch-resolver.xcconfig"; sourceTree = "<group>"; };
+ E4128ED513BA9A1700ABB2CB /* hw_config.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = hw_config.h; sourceTree = "<group>"; };
+ E422A0D412A557B5005E5BDB /* trace.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = trace.h; sourceTree = "<group>"; };
+ E43570B8126E93380097AB9F /* provider.d */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.dtrace; path = provider.d; sourceTree = "<group>"; };
+ E43D93F11097917E004F6A62 /* libdispatch.xcconfig */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = text.xcconfig; path = libdispatch.xcconfig; sourceTree = "<group>"; };
+ E44EBE331251654000645D88 /* resolver.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = resolver.h; sourceTree = "<group>"; };
+ E44EBE371251656400645D88 /* resolver.c */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.c; path = resolver.c; sourceTree = "<group>"; };
+ E44EBE3B1251659900645D88 /* init.c */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.c; path = init.c; sourceTree = "<group>"; };
+ E47D6BB5125F0F800070D91C /* resolved.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = resolved.h; sourceTree = "<group>"; };
+ E482F1CD12DBAB590030614D /* postprocess-headers.sh */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = text.script.sh; path = "postprocess-headers.sh"; sourceTree = "<group>"; };
+ E49F24DF125D57FA0057C971 /* libdispatch.dylib */ = {isa = PBXFileReference; explicitFileType = "compiled.mach-o.dylib"; includeInIndex = 0; path = libdispatch.dylib; sourceTree = BUILT_PRODUCTS_DIR; };
+ E49F251C125D629F0057C971 /* symlink-headers.sh */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = text.script.sh; path = "symlink-headers.sh"; sourceTree = "<group>"; };
+ E49F251D125D630A0057C971 /* install-manpages.sh */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = text.script.sh; path = "install-manpages.sh"; sourceTree = "<group>"; };
+ E49F251E125D631D0057C971 /* mig-headers.sh */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = text.script.sh; path = "mig-headers.sh"; sourceTree = "<group>"; };
+ E4BA743513A88FE10095BDF1 /* dispatch_data_create.3 */ = {isa = PBXFileReference; lastKnownFileType = text; path = dispatch_data_create.3; sourceTree = "<group>"; };
+ E4BA743613A88FF30095BDF1 /* dispatch_io_create.3 */ = {isa = PBXFileReference; lastKnownFileType = text; path = dispatch_io_create.3; sourceTree = "<group>"; };
+ E4BA743713A88FF30095BDF1 /* dispatch_io_read.3 */ = {isa = PBXFileReference; lastKnownFileType = text; path = dispatch_io_read.3; sourceTree = "<group>"; };
+ E4BA743813A8900B0095BDF1 /* dispatch_read.3 */ = {isa = PBXFileReference; lastKnownFileType = text; path = dispatch_read.3; sourceTree = "<group>"; };
+ E4BA743913A8911B0095BDF1 /* getprogname.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = getprogname.h; sourceTree = "<group>"; };
+ E4BA743A13A8911B0095BDF1 /* malloc_zone.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = malloc_zone.h; sourceTree = "<group>"; };
+ E4C1ED6E1263E714000D3C8B /* data_internal.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = data_internal.h; sourceTree = "<group>"; };
+ E4EC11C312514302000DDBD1 /* libdispatch_up.a */ = {isa = PBXFileReference; explicitFileType = archive.ar; includeInIndex = 0; path = libdispatch_up.a; sourceTree = BUILT_PRODUCTS_DIR; };
+ E4EC122D12514715000DDBD1 /* libdispatch_mp.a */ = {isa = PBXFileReference; explicitFileType = archive.ar; includeInIndex = 0; path = libdispatch_mp.a; sourceTree = BUILT_PRODUCTS_DIR; };
+ FC0B34780FA2851C0080FFA0 /* source_internal.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = source_internal.h; sourceTree = "<group>"; };
+ FC1832A2109923C7003403D5 /* perfmon.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = perfmon.h; sourceTree = "<group>"; };
+ FC1832A3109923C7003403D5 /* time.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = time.h; sourceTree = "<group>"; };
+ FC1832A4109923C7003403D5 /* tsd.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = tsd.h; sourceTree = "<group>"; };
+ FC36279C0E933ED80054F1A3 /* dispatch_queue_create.3 */ = {isa = PBXFileReference; explicitFileType = text.man; fileEncoding = 4; path = dispatch_queue_create.3; sourceTree = "<group>"; };
+ FC5C9C1D0EADABE3006E462D /* group.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = group.h; sourceTree = "<group>"; };
+ FC678DE80F97E0C300AB5993 /* dispatch_after.3 */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = text; path = dispatch_after.3; sourceTree = "<group>"; };
+ FC678DE90F97E0C300AB5993 /* dispatch_api.3 */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = text; path = dispatch_api.3; sourceTree = "<group>"; };
+ FC678DEA0F97E0C300AB5993 /* dispatch_async.3 */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = text; path = dispatch_async.3; sourceTree = "<group>"; };
+ FC678DEB0F97E0C300AB5993 /* dispatch_group_create.3 */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = text; path = dispatch_group_create.3; sourceTree = "<group>"; };
+ FC678DEC0F97E0C300AB5993 /* dispatch_time.3 */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = text; path = dispatch_time.3; sourceTree = "<group>"; };
+ FC7BED8A0E8361E600161930 /* queue.c */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.c; lineEnding = 0; path = queue.c; sourceTree = "<group>"; xcLanguageSpecificationIdentifier = xcode.lang.c; };
+ FC7BED8B0E8361E600161930 /* queue.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = queue.h; sourceTree = "<group>"; };
+ FC7BED8D0E8361E600161930 /* source.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = source.h; sourceTree = "<group>"; };
+ FC7BED8F0E8361E600161930 /* internal.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = internal.h; sourceTree = "<group>"; };
+ FC7BED930E8361E600161930 /* private.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = private.h; sourceTree = "<group>"; };
+ FC7BED950E8361E600161930 /* protocol.defs */ = {isa = PBXFileReference; explicitFileType = sourcecode.mig; fileEncoding = 4; path = protocol.defs; sourceTree = "<group>"; };
+ FC7BED960E8361E600161930 /* dispatch.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = dispatch.h; sourceTree = "<group>"; };
+ FC9C70E7105EC9620074F9CA /* config.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = config.h; sourceTree = "<group>"; };
+ FCEF047F0F5661960067401F /* source_private.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = source_private.h; sourceTree = "<group>"; };
/* End PBXFileReference section */
/* Begin PBXFrameworksBuildPhase section */
@@ -108,12 +280,20 @@
);
runOnlyForDeploymentPostprocessing = 0;
};
+ E49F24D5125D57FA0057C971 /* Frameworks */ = {
+ isa = PBXFrameworksBuildPhase;
+ buildActionMask = 2147483647;
+ files = (
+ );
+ runOnlyForDeploymentPostprocessing = 0;
+ };
/* End PBXFrameworksBuildPhase section */
/* Begin PBXGroup section */
08FB7794FE84155DC02AAC07 /* libdispatch */ = {
isa = PBXGroup;
children = (
+ E44DB71E11D2FF080074F2AD /* Build Support */,
FC7BEDAA0E83625200161930 /* Public Headers */,
FC7BEDAF0E83626100161930 /* Private Headers */,
FC7BEDB60E8363DC00161930 /* Project Headers */,
@@ -121,31 +301,40 @@
C6A0FF2B0290797F04C91782 /* Documentation */,
1AB674ADFE9D54B511CA2CBB /* Products */,
);
+ indentWidth = 4;
name = libdispatch;
sourceTree = "<group>";
+ tabWidth = 4;
+ usesTabs = 1;
};
08FB7795FE84155DC02AAC07 /* Source */ = {
isa = PBXGroup;
children = (
9676A0E00F3E755D00713ADB /* apply.c */,
965CD6340F3E806200D4E28D /* benchmark.c */,
+ 5AAB45BF10D30B79004407EA /* data.c */,
+ E44EBE3B1251659900645D88 /* init.c */,
+ 5A27262510F26F1900751FBC /* io.c */,
9661E56A0F3E7DDF00749F3E /* object.c */,
96DF70BD0F38FE3C0074BD99 /* once.c */,
FC7BED8A0E8361E600161930 /* queue.c */,
721F5CCE0F15553500FF03A6 /* semaphore.c */,
96A8AA860F41E7A400CD570B /* source.c */,
96032E4A0F5CC8C700241C5F /* time.c */,
- 2EC9C9B70E8809EF00E2499A /* legacy.c */,
FC7BED950E8361E600161930 /* protocol.defs */,
- FC18329D10992387003403D5 /* shims */,
+ E43570B8126E93380097AB9F /* provider.d */,
);
name = Source;
+ path = src;
sourceTree = "<group>";
};
1AB674ADFE9D54B511CA2CBB /* Products */ = {
isa = PBXGroup;
children = (
- D2AAC046055464E500DB518D /* libdispatch.a */,
+ D2AAC046055464E500DB518D /* libdispatch.dylib */,
+ E4EC11C312514302000DDBD1 /* libdispatch_up.a */,
+ E4EC122D12514715000DDBD1 /* libdispatch_mp.a */,
+ E49F24DF125D57FA0057C971 /* libdispatch.dylib */,
);
name = Products;
sourceTree = "<group>";
@@ -159,43 +348,103 @@
960F0E7D0F3FB232000D88BF /* dispatch_apply.3 */,
FC678DEA0F97E0C300AB5993 /* dispatch_async.3 */,
96859A3D0EF71BAD003EB3FB /* dispatch_benchmark.3 */,
+ E4BA743513A88FE10095BDF1 /* dispatch_data_create.3 */,
FC678DEB0F97E0C300AB5993 /* dispatch_group_create.3 */,
+ E4BA743613A88FF30095BDF1 /* dispatch_io_create.3 */,
+ E4BA743713A88FF30095BDF1 /* dispatch_io_read.3 */,
72CC940C0ECCD5720031B751 /* dispatch_object.3 */,
960F0E7E0F3FB232000D88BF /* dispatch_once.3 */,
FC36279C0E933ED80054F1A3 /* dispatch_queue_create.3 */,
+ E4BA743813A8900B0095BDF1 /* dispatch_read.3 */,
963FDDE50F3FB6BD00BF2D00 /* dispatch_semaphore_create.3 */,
72B54F690EB169EB00DBECBA /* dispatch_source_create.3 */,
FC678DEC0F97E0C300AB5993 /* dispatch_time.3 */,
);
name = Documentation;
+ path = man;
sourceTree = "<group>";
};
- FC18329D10992387003403D5 /* shims */ = {
+ C927F36010FD7F1000C5AB8B /* Products */ = {
isa = PBXGroup;
children = (
- FC18329E109923A7003403D5 /* mach.c */,
- E4BF990010A89607007655D0 /* time.c */,
+ C927F36710FD7F1000C5AB8B /* ddt */,
);
- name = shims;
+ name = Products;
+ sourceTree = "<group>";
+ };
+ E40041E4125E71150022B135 /* xcodeconfig */ = {
+ isa = PBXGroup;
+ children = (
+ E43D93F11097917E004F6A62 /* libdispatch.xcconfig */,
+ E40041AA125D705F0022B135 /* libdispatch-resolver.xcconfig */,
+ E40041A9125D70590022B135 /* libdispatch-resolved.xcconfig */,
+ );
+ path = xcodeconfig;
+ sourceTree = "<group>";
+ };
+ E44DB71E11D2FF080074F2AD /* Build Support */ = {
+ isa = PBXGroup;
+ children = (
+ E4BA743413A88D390095BDF1 /* config */,
+ E40041E4125E71150022B135 /* xcodeconfig */,
+ E49F259C125D664F0057C971 /* xcodescripts */,
+ E47D6BCA125F10F70070D91C /* resolver */,
+ C927F35F10FD7F1000C5AB8B /* ddt.xcodeproj */,
+ );
+ name = "Build Support";
+ sourceTree = "<group>";
+ };
+ E47D6BCA125F10F70070D91C /* resolver */ = {
+ isa = PBXGroup;
+ children = (
+ E47D6BB5125F0F800070D91C /* resolved.h */,
+ E44EBE371251656400645D88 /* resolver.c */,
+ E44EBE331251654000645D88 /* resolver.h */,
+ );
+ path = resolver;
+ sourceTree = "<group>";
+ };
+ E49F259C125D664F0057C971 /* xcodescripts */ = {
+ isa = PBXGroup;
+ children = (
+ E49F251D125D630A0057C971 /* install-manpages.sh */,
+ E49F251E125D631D0057C971 /* mig-headers.sh */,
+ E482F1CD12DBAB590030614D /* postprocess-headers.sh */,
+ E49F251C125D629F0057C971 /* symlink-headers.sh */,
+ );
+ path = xcodescripts;
+ sourceTree = "<group>";
+ };
+ E4BA743413A88D390095BDF1 /* config */ = {
+ isa = PBXGroup;
+ children = (
+ FC9C70E7105EC9620074F9CA /* config.h */,
+ );
+ path = config;
sourceTree = "<group>";
};
FC1832A0109923B3003403D5 /* shims */ = {
isa = PBXGroup;
children = (
+ 96929D820F3EA1020041FF5D /* atomic.h */,
+ E4BA743913A8911B0095BDF1 /* getprogname.h */,
+ E4128ED513BA9A1700ABB2CB /* hw_config.h */,
+ E4BA743A13A8911B0095BDF1 /* malloc_zone.h */,
FC1832A2109923C7003403D5 /* perfmon.h */,
FC1832A3109923C7003403D5 /* time.h */,
FC1832A4109923C7003403D5 /* tsd.h */,
);
- name = shims;
+ path = shims;
sourceTree = "<group>";
};
FC7BEDAA0E83625200161930 /* Public Headers */ = {
isa = PBXGroup;
children = (
72CC942F0ECCD8750031B751 /* base.h */,
- 961B99350F3E83980006BC96 /* benchmark.h */,
+ 5AAB45C510D30D0C004407EA /* data.h */,
FC7BED960E8361E600161930 /* dispatch.h */,
FC5C9C1D0EADABE3006E462D /* group.h */,
+ 5AAB45C310D30CC7004407EA /* io.h */,
961B994F0F3E85C30006BC96 /* object.h */,
96C9553A0F3EAEDD000D2CA4 /* once.h */,
FC7BED8B0E8361E600161930 /* queue.h */,
@@ -204,6 +453,7 @@
96032E4C0F5CC8D100241C5F /* time.h */,
);
name = "Public Headers";
+ path = dispatch;
sourceTree = "<group>";
};
FC7BEDAF0E83626100161930 /* Private Headers */ = {
@@ -212,25 +462,28 @@
FC7BED930E8361E600161930 /* private.h */,
96BC39BC0F3EBAB100C59689 /* queue_private.h */,
FCEF047F0F5661960067401F /* source_private.h */,
- FC7BED900E8361E600161930 /* legacy.h */,
+ 961B99350F3E83980006BC96 /* benchmark.h */,
);
name = "Private Headers";
+ path = private;
sourceTree = "<group>";
};
FC7BEDB60E8363DC00161930 /* Project Headers */ = {
isa = PBXGroup;
children = (
- FC9C70E7105EC9620074F9CA /* config.h */,
- 96929D820F3EA1020041FF5D /* hw_shims.h */,
FC7BED8F0E8361E600161930 /* internal.h */,
+ E4C1ED6E1263E714000D3C8B /* data_internal.h */,
+ 5A0095A110F274B0000E2A31 /* io_internal.h */,
965ECC200F3EAB71004DDD89 /* object_internal.h */,
- 96929D830F3EA1020041FF5D /* os_shims.h */,
96929D950F3EA2170041FF5D /* queue_internal.h */,
5A5D13AB0F6B280500197CC3 /* semaphore_internal.h */,
FC0B34780FA2851C0080FFA0 /* source_internal.h */,
+ E422A0D412A557B5005E5BDB /* trace.h */,
+ 96929D830F3EA1020041FF5D /* shims.h */,
FC1832A0109923B3003403D5 /* shims */,
);
name = "Project Headers";
+ path = src;
sourceTree = "<group>";
};
/* End PBXGroup section */
@@ -240,67 +493,82 @@
isa = PBXHeadersBuildPhase;
buildActionMask = 2147483647;
files = (
- 72CC94300ECCD8750031B751 /* base.h in Headers */,
FC7BEDA50E8361E600161930 /* dispatch.h in Headers */,
+ 72CC94300ECCD8750031B751 /* base.h in Headers */,
+ 961B99500F3E85C30006BC96 /* object.h in Headers */,
FC7BED9A0E8361E600161930 /* queue.h in Headers */,
FC7BED9C0E8361E600161930 /* source.h in Headers */,
- FC5C9C1E0EADABE3006E462D /* group.h in Headers */,
- FC7BEDA20E8361E600161930 /* private.h in Headers */,
- FC7BED9F0E8361E600161930 /* legacy.h in Headers */,
- FC7BED9E0E8361E600161930 /* internal.h in Headers */,
721F5C5D0F15520500FF03A6 /* semaphore.h in Headers */,
- 961B99360F3E83980006BC96 /* benchmark.h in Headers */,
- 961B99500F3E85C30006BC96 /* object.h in Headers */,
- 96929D840F3EA1020041FF5D /* hw_shims.h in Headers */,
- 96929D850F3EA1020041FF5D /* os_shims.h in Headers */,
- 96929D960F3EA2170041FF5D /* queue_internal.h in Headers */,
- 965ECC210F3EAB71004DDD89 /* object_internal.h in Headers */,
+ FC5C9C1E0EADABE3006E462D /* group.h in Headers */,
96C9553B0F3EAEDD000D2CA4 /* once.h in Headers */,
+ 5AAB45C410D30CC7004407EA /* io.h in Headers */,
+ 5AAB45C610D30D0C004407EA /* data.h in Headers */,
+ 96032E4D0F5CC8D100241C5F /* time.h in Headers */,
+ FC7BEDA20E8361E600161930 /* private.h in Headers */,
96BC39BD0F3EBAB100C59689 /* queue_private.h in Headers */,
FCEF04800F5661960067401F /* source_private.h in Headers */,
- 96032E4D0F5CC8D100241C5F /* time.h in Headers */,
- 5A5D13AC0F6B280500197CC3 /* semaphore_internal.h in Headers */,
+ 961B99360F3E83980006BC96 /* benchmark.h in Headers */,
+ FC7BED9E0E8361E600161930 /* internal.h in Headers */,
+ 965ECC210F3EAB71004DDD89 /* object_internal.h in Headers */,
+ 96929D960F3EA2170041FF5D /* queue_internal.h in Headers */,
FC0B34790FA2851C0080FFA0 /* source_internal.h in Headers */,
- FC9C70E8105EC9620074F9CA /* config.h in Headers */,
- FC1832A6109923C7003403D5 /* perfmon.h in Headers */,
- FC1832A7109923C7003403D5 /* time.h in Headers */,
+ 5A5D13AC0F6B280500197CC3 /* semaphore_internal.h in Headers */,
+ E4C1ED6F1263E714000D3C8B /* data_internal.h in Headers */,
+ 5A0095A210F274B0000E2A31 /* io_internal.h in Headers */,
FC1832A8109923C7003403D5 /* tsd.h in Headers */,
+ 96929D840F3EA1020041FF5D /* atomic.h in Headers */,
+ 96929D850F3EA1020041FF5D /* shims.h in Headers */,
+ FC1832A7109923C7003403D5 /* time.h in Headers */,
+ FC1832A6109923C7003403D5 /* perfmon.h in Headers */,
+ FC9C70E8105EC9620074F9CA /* config.h in Headers */,
+ E422A0D512A557B5005E5BDB /* trace.h in Headers */,
+ E4BA743B13A8911B0095BDF1 /* getprogname.h in Headers */,
+ E4BA743F13A8911B0095BDF1 /* malloc_zone.h in Headers */,
+ E4128ED613BA9A1700ABB2CB /* hw_config.h in Headers */,
+ );
+ runOnlyForDeploymentPostprocessing = 0;
+ };
+ E49F24AA125D57FA0057C971 /* Headers */ = {
+ isa = PBXHeadersBuildPhase;
+ buildActionMask = 2147483647;
+ files = (
+ E49F24AB125D57FA0057C971 /* dispatch.h in Headers */,
+ E49F24AC125D57FA0057C971 /* base.h in Headers */,
+ E49F24AD125D57FA0057C971 /* object.h in Headers */,
+ E49F24AE125D57FA0057C971 /* queue.h in Headers */,
+ E49F24AF125D57FA0057C971 /* source.h in Headers */,
+ E49F24B0125D57FA0057C971 /* semaphore.h in Headers */,
+ E49F24B1125D57FA0057C971 /* group.h in Headers */,
+ E49F24B2125D57FA0057C971 /* once.h in Headers */,
+ E49F24B3125D57FA0057C971 /* io.h in Headers */,
+ E49F24B4125D57FA0057C971 /* data.h in Headers */,
+ E49F24B5125D57FA0057C971 /* time.h in Headers */,
+ E49F24B6125D57FA0057C971 /* private.h in Headers */,
+ E49F24B7125D57FA0057C971 /* queue_private.h in Headers */,
+ E49F24B8125D57FA0057C971 /* source_private.h in Headers */,
+ E49F24B9125D57FA0057C971 /* benchmark.h in Headers */,
+ E49F24BA125D57FA0057C971 /* internal.h in Headers */,
+ E49F24BC125D57FA0057C971 /* object_internal.h in Headers */,
+ E49F24BB125D57FA0057C971 /* queue_internal.h in Headers */,
+ E49F24BE125D57FA0057C971 /* source_internal.h in Headers */,
+ E49F24BD125D57FA0057C971 /* semaphore_internal.h in Headers */,
+ E4C1ED701263E714000D3C8B /* data_internal.h in Headers */,
+ E49F24BF125D57FA0057C971 /* io_internal.h in Headers */,
+ E49F24C1125D57FA0057C971 /* tsd.h in Headers */,
+ E49F24C2125D57FA0057C971 /* atomic.h in Headers */,
+ E49F24C3125D57FA0057C971 /* shims.h in Headers */,
+ E49F24C4125D57FA0057C971 /* time.h in Headers */,
+ E49F24C5125D57FA0057C971 /* perfmon.h in Headers */,
+ E49F24C6125D57FA0057C971 /* config.h in Headers */,
+ E422A0D612A557B5005E5BDB /* trace.h in Headers */,
+ E4BA743C13A8911B0095BDF1 /* getprogname.h in Headers */,
+ E4BA744013A8911B0095BDF1 /* malloc_zone.h in Headers */,
+ E4128ED713BA9A1700ABB2CB /* hw_config.h in Headers */,
);
runOnlyForDeploymentPostprocessing = 0;
};
/* End PBXHeadersBuildPhase section */
-/* Begin PBXLegacyTarget section */
- 721EB4790F69D26F00845379 /* testbots */ = {
- isa = PBXLegacyTarget;
- buildArgumentsString = testbots;
- buildConfigurationList = 721EB4850F69D2A600845379 /* Build configuration list for PBXLegacyTarget "testbots" */;
- buildPhases = (
- );
- buildToolPath = /usr/bin/make;
- buildWorkingDirectory = testing;
- dependencies = (
- );
- name = testbots;
- passBuildSettingsInEnvironment = 0;
- productName = testbots;
- };
- 7276FCBA0EB10E0F00F7F487 /* test */ = {
- isa = PBXLegacyTarget;
- buildArgumentsString = test;
- buildConfigurationList = 7276FCC80EB10E2300F7F487 /* Build configuration list for PBXLegacyTarget "test" */;
- buildPhases = (
- );
- buildToolPath = /usr/bin/make;
- buildWorkingDirectory = testing;
- dependencies = (
- );
- name = test;
- passBuildSettingsInEnvironment = 0;
- productName = test;
- };
-/* End PBXLegacyTarget section */
-
/* Begin PBXNativeTarget section */
D2AAC045055464E500DB518D /* libdispatch */ = {
isa = PBXNativeTarget;
@@ -309,16 +577,73 @@
D2AAC043055464E500DB518D /* Headers */,
D2AAC044055464E500DB518D /* Sources */,
D289987405E68DCB004EDB86 /* Frameworks */,
- 2EC9C9800E846B5200E2499A /* ShellScript */,
- 4CED8B9D0EEDF8B600AF99AB /* ShellScript */,
+ E482F1C512DBAA110030614D /* Postprocess Headers */,
+ 2EC9C9800E846B5200E2499A /* Symlink Headers */,
+ 4CED8B9D0EEDF8B600AF99AB /* Install Manpages */,
+ );
+ buildRules = (
+ );
+ dependencies = (
+ E47D6ECB125FEB9D0070D91C /* PBXTargetDependency */,
+ E47D6ECD125FEBA10070D91C /* PBXTargetDependency */,
+ );
+ name = libdispatch;
+ productName = libdispatch;
+ productReference = D2AAC046055464E500DB518D /* libdispatch.dylib */;
+ productType = "com.apple.product-type.library.dynamic";
+ };
+ E49F24A9125D57FA0057C971 /* libdispatch no resolver */ = {
+ isa = PBXNativeTarget;
+ buildConfigurationList = E49F24D8125D57FA0057C971 /* Build configuration list for PBXNativeTarget "libdispatch no resolver" */;
+ buildPhases = (
+ E49F24AA125D57FA0057C971 /* Headers */,
+ E49F24C7125D57FA0057C971 /* Sources */,
+ E49F24D5125D57FA0057C971 /* Frameworks */,
+ E4128EB213B9612700ABB2CB /* Postprocess Headers */,
+ E49F24D6125D57FA0057C971 /* Symlink Headers */,
+ E49F24D7125D57FA0057C971 /* Install Manpages */,
);
buildRules = (
);
dependencies = (
);
- name = libdispatch;
+ name = "libdispatch no resolver";
productName = libdispatch;
- productReference = D2AAC046055464E500DB518D /* libdispatch.a */;
+ productReference = E49F24DF125D57FA0057C971 /* libdispatch.dylib */;
+ productType = "com.apple.product-type.library.dynamic";
+ };
+ E4EC118F12514302000DDBD1 /* libdispatch up resolved */ = {
+ isa = PBXNativeTarget;
+ buildConfigurationList = E4EC11BC12514302000DDBD1 /* Build configuration list for PBXNativeTarget "libdispatch up resolved" */;
+ buildPhases = (
+ E4EC12141251461A000DDBD1 /* Mig Headers */,
+ E4EC11AC12514302000DDBD1 /* Sources */,
+ E4EC121212514613000DDBD1 /* Symlink normal variant */,
+ );
+ buildRules = (
+ );
+ dependencies = (
+ );
+ name = "libdispatch up resolved";
+ productName = libdispatch;
+ productReference = E4EC11C312514302000DDBD1 /* libdispatch_up.a */;
+ productType = "com.apple.product-type.library.static";
+ };
+ E4EC121612514715000DDBD1 /* libdispatch mp resolved */ = {
+ isa = PBXNativeTarget;
+ buildConfigurationList = E4EC122612514715000DDBD1 /* Build configuration list for PBXNativeTarget "libdispatch mp resolved" */;
+ buildPhases = (
+ E4EC121712514715000DDBD1 /* Mig Headers */,
+ E4EC121812514715000DDBD1 /* Sources */,
+ E4EC122512514715000DDBD1 /* Symlink normal variant */,
+ );
+ buildRules = (
+ );
+ dependencies = (
+ );
+ name = "libdispatch mp resolved";
+ productName = libdispatch;
+ productReference = E4EC122D12514715000DDBD1 /* libdispatch_mp.a */;
productType = "com.apple.product-type.library.static";
};
/* End PBXNativeTarget section */
@@ -326,8 +651,12 @@
/* Begin PBXProject section */
08FB7793FE84155DC02AAC07 /* Project object */ = {
isa = PBXProject;
+ attributes = {
+ BuildIndependentTargetsInParallel = YES;
+ LastUpgradeCheck = 0420;
+ };
buildConfigurationList = 1DEB91EF08733DB70010E9CD /* Build configuration list for PBXProject "libdispatch" */;
- compatibilityVersion = "Xcode 3.1";
+ compatibilityVersion = "Xcode 3.2";
developmentRegion = English;
hasScannedForEncodings = 1;
knownRegions = (
@@ -338,41 +667,200 @@
);
mainGroup = 08FB7794FE84155DC02AAC07 /* libdispatch */;
projectDirPath = "";
+ projectReferences = (
+ {
+ ProductGroup = C927F36010FD7F1000C5AB8B /* Products */;
+ ProjectRef = C927F35F10FD7F1000C5AB8B /* ddt.xcodeproj */;
+ },
+ );
projectRoot = "";
targets = (
D2AAC045055464E500DB518D /* libdispatch */,
- 7276FCBA0EB10E0F00F7F487 /* test */,
- 721EB4790F69D26F00845379 /* testbots */,
+ E49F24A9125D57FA0057C971 /* libdispatch no resolver */,
+ E4EC118F12514302000DDBD1 /* libdispatch up resolved */,
+ E4EC121612514715000DDBD1 /* libdispatch mp resolved */,
+ 3F3C9326128E637B0042B1F7 /* libdispatch_Sim */,
+ C927F35A10FD7F0600C5AB8B /* libdispatch_tools */,
);
};
/* End PBXProject section */
-/* Begin PBXShellScriptBuildPhase section */
- 2EC9C9800E846B5200E2499A /* ShellScript */ = {
- isa = PBXShellScriptBuildPhase;
- buildActionMask = 8;
- files = (
- );
- inputPaths = (
- );
- outputPaths = (
- );
- runOnlyForDeploymentPostprocessing = 1;
- shellPath = /bin/sh;
- shellScript = "# private.h supersedes dispatch.h where available\nmv \"$DSTROOT\"/usr/local/include/dispatch/private.h \"$DSTROOT\"/usr/local/include/dispatch/dispatch.h\nln -sf dispatch.h \"$DSTROOT\"/usr/local/include/dispatch/private.h\n\n# keep events.h around for a little while\nln -sf ../../../include/dispatch/source.h \"$DSTROOT\"/usr/local/include/dispatch/events.h";
+/* Begin PBXReferenceProxy section */
+ C927F36710FD7F1000C5AB8B /* ddt */ = {
+ isa = PBXReferenceProxy;
+ fileType = "compiled.mach-o.executable";
+ path = ddt;
+ remoteRef = C927F36610FD7F1000C5AB8B /* PBXContainerItemProxy */;
+ sourceTree = BUILT_PRODUCTS_DIR;
};
- 4CED8B9D0EEDF8B600AF99AB /* ShellScript */ = {
+/* End PBXReferenceProxy section */
+
+/* Begin PBXShellScriptBuildPhase section */
+ 2EC9C9800E846B5200E2499A /* Symlink Headers */ = {
+ isa = PBXShellScriptBuildPhase;
+ buildActionMask = 12;
+ files = (
+ );
+ inputPaths = (
+ "$(SRCROOT)/xcodescripts/symlink-headers.sh",
+ );
+ name = "Symlink Headers";
+ outputPaths = (
+ );
+ runOnlyForDeploymentPostprocessing = 0;
+ shellPath = "/bin/bash -e";
+ shellScript = ". \"${SCRIPT_INPUT_FILE_0}\"";
+ showEnvVarsInLog = 0;
+ };
+ 4CED8B9D0EEDF8B600AF99AB /* Install Manpages */ = {
isa = PBXShellScriptBuildPhase;
buildActionMask = 8;
files = (
);
inputPaths = (
+ "$(SRCROOT)/xcodescripts/install-manpages.sh",
);
+ name = "Install Manpages";
outputPaths = (
);
runOnlyForDeploymentPostprocessing = 1;
- shellPath = /bin/sh;
- shellScript = "#!/bin/sh\n\nmkdir -p $DSTROOT/usr/share/man/man3 || true\nmkdir -p $DSTROOT/usr/local/share/man/man3 || true\n\n# Copy man pages\ncd $SRCROOT/man\nBASE_PAGES=\"dispatch.3 dispatch_after.3 dispatch_api.3 dispatch_apply.3 dispatch_async.3 dispatch_group_create.3 dispatch_object.3 dispatch_once.3 dispatch_queue_create.3 dispatch_semaphore_create.3 dispatch_source_create.3 dispatch_time.3\"\n\nPRIVATE_PAGES=\"dispatch_benchmark.3\"\n\ncp ${BASE_PAGES} $DSTROOT/usr/share/man/man3\ncp ${PRIVATE_PAGES} $DSTROOT/usr/local/share/man/man3\n\n# Make hard links (lots of hard links)\n\ncd $DSTROOT/usr/local/share/man/man3\nln -f dispatch_benchmark.3 dispatch_benchmark_f.3\nchown ${INSTALL_OWNER}:${INSTALL_GROUP} $PRIVATE_PAGES\nchmod $INSTALL_MODE_FLAG $PRIVATE_PAGES\n\n\ncd $DSTROOT/usr/share/man/man3\n\nchown ${INSTALL_OWNER}:${INSTALL_GROUP} $BASE_PAGES\nchmod $INSTALL_MODE_FLAG $BASE_PAGES\n\nln -f dispatch_after.3 dispatch_after_f.3\nln -f dispatch_apply.3 dispatch_apply_f.3\nln -f dispatch_once.3 dispatch_once_f.3\n\nfor m in dispatch_async_f dispatch_sync dispatch_sync_f; do\n\tln -f dispatch_async.3 ${m}.3\ndone\n\nfor m in dispatch_group_enter dispatch_group_leave dispatch_group_wait dispatch_group_async dispatch_group_async_f dispatch_group_notify dispatch_group_notify_f; do\n\tln -f dispatch_group_create.3 ${m}.3\ndone\n\nfor m in dispatch_retain dispatch_release dispatch_suspend dispatch_resume dispatch_get_context dispatch_set_context dispatch_set_finalizer_f; do\n\tln -f dispatch_object.3 ${m}.3\ndone\n\nfor m in dispatch_semaphore_signal dispatch_semaphore_wait; do\n\tln -f dispatch_semaphore_create.3 ${m}.3\ndone\n\nfor m in dispatch_get_current_queue dispatch_main dispatch_get_main_queue dispatch_get_global_queue dispatch_queue_get_label dispatch_set_target_queue; do\n\tln -f dispatch_queue_create.3 ${m}.3\ndone\n\nfor m in dispatch_source_set_event_handler dispatch_source_set_event_handler_f dispatch_source_set_cancel_handler dispatch_source_set_cancel_handler_f dispatch_source_cancel dispatch_source_testcancel dispatch_source_get_handle dispatch_source_get_mask dispatch_source_get_data dispatch_source_merge_data dispatch_source_set_timer; do\n\tln -f dispatch_source_create.3 ${m}.3\ndone\n\nln -f dispatch_time.3 dispatch_walltime.3";
+ shellPath = "/bin/bash -e";
+ shellScript = ". \"${SCRIPT_INPUT_FILE_0}\"";
+ showEnvVarsInLog = 0;
+ };
+ E4128EB213B9612700ABB2CB /* Postprocess Headers */ = {
+ isa = PBXShellScriptBuildPhase;
+ buildActionMask = 8;
+ files = (
+ );
+ inputPaths = (
+ "$(SRCROOT)/xcodescripts/postprocess-headers.sh",
+ );
+ name = "Postprocess Headers";
+ outputPaths = (
+ );
+ runOnlyForDeploymentPostprocessing = 1;
+ shellPath = "/bin/bash -e";
+ shellScript = ". \"${SCRIPT_INPUT_FILE_0}\" ";
+ showEnvVarsInLog = 0;
+ };
+ E482F1C512DBAA110030614D /* Postprocess Headers */ = {
+ isa = PBXShellScriptBuildPhase;
+ buildActionMask = 8;
+ files = (
+ );
+ inputPaths = (
+ "$(SRCROOT)/xcodescripts/postprocess-headers.sh",
+ );
+ name = "Postprocess Headers";
+ outputPaths = (
+ );
+ runOnlyForDeploymentPostprocessing = 1;
+ shellPath = "/bin/bash -e";
+ shellScript = ". \"${SCRIPT_INPUT_FILE_0}\"";
+ showEnvVarsInLog = 0;
+ };
+ E49F24D6125D57FA0057C971 /* Symlink Headers */ = {
+ isa = PBXShellScriptBuildPhase;
+ buildActionMask = 12;
+ files = (
+ );
+ inputPaths = (
+ "$(SRCROOT)/xcodescripts/symlink-headers.sh",
+ );
+ name = "Symlink Headers";
+ outputPaths = (
+ );
+ runOnlyForDeploymentPostprocessing = 0;
+ shellPath = "/bin/bash -e";
+ shellScript = ". \"${SCRIPT_INPUT_FILE_0}\"";
+ showEnvVarsInLog = 0;
+ };
+ E49F24D7125D57FA0057C971 /* Install Manpages */ = {
+ isa = PBXShellScriptBuildPhase;
+ buildActionMask = 8;
+ files = (
+ );
+ inputPaths = (
+ "$(SRCROOT)/xcodescripts/install-manpages.sh",
+ );
+ name = "Install Manpages";
+ outputPaths = (
+ );
+ runOnlyForDeploymentPostprocessing = 1;
+ shellPath = "/bin/bash -e";
+ shellScript = ". \"${SCRIPT_INPUT_FILE_0}\"";
+ showEnvVarsInLog = 0;
+ };
+ E4EC121212514613000DDBD1 /* Symlink normal variant */ = {
+ isa = PBXShellScriptBuildPhase;
+ buildActionMask = 2147483647;
+ files = (
+ );
+ inputPaths = (
+ );
+ name = "Symlink normal variant";
+ outputPaths = (
+ "$(CONFIGURATION_BUILD_DIR)/$(PRODUCT_NAME)_normal.a",
+ );
+ runOnlyForDeploymentPostprocessing = 0;
+ shellPath = "/bin/bash -e";
+ shellScript = "ln -fs \"${PRODUCT_NAME}.a\" \"${SCRIPT_OUTPUT_FILE_0}\"";
+ showEnvVarsInLog = 0;
+ };
+ E4EC12141251461A000DDBD1 /* Mig Headers */ = {
+ isa = PBXShellScriptBuildPhase;
+ buildActionMask = 2147483647;
+ files = (
+ );
+ inputPaths = (
+ "$(SRCROOT)/src/protocol.defs",
+ "$(SRCROOT)/xcodescripts/mig-headers.sh",
+ );
+ name = "Mig Headers";
+ outputPaths = (
+ "$(DERIVED_FILE_DIR)/protocol.h",
+ "$(DERIVED_FILE_DIR)/protocolServer.h",
+ );
+ runOnlyForDeploymentPostprocessing = 0;
+ shellPath = "/bin/bash -e";
+ shellScript = ". \"${SCRIPT_INPUT_FILE_1}\"";
+ showEnvVarsInLog = 0;
+ };
+ E4EC121712514715000DDBD1 /* Mig Headers */ = {
+ isa = PBXShellScriptBuildPhase;
+ buildActionMask = 2147483647;
+ files = (
+ );
+ inputPaths = (
+ "$(SRCROOT)/src/protocol.defs",
+ "$(SRCROOT)/xcodescripts/mig-headers.sh",
+ );
+ name = "Mig Headers";
+ outputPaths = (
+ "$(DERIVED_FILE_DIR)/protocol.h",
+ "$(DERIVED_FILE_DIR)/protocolServer.h",
+ );
+ runOnlyForDeploymentPostprocessing = 0;
+ shellPath = "/bin/bash -e";
+ shellScript = ". \"${SCRIPT_INPUT_FILE_1}\"";
+ showEnvVarsInLog = 0;
+ };
+ E4EC122512514715000DDBD1 /* Symlink normal variant */ = {
+ isa = PBXShellScriptBuildPhase;
+ buildActionMask = 2147483647;
+ files = (
+ );
+ inputPaths = (
+ );
+ name = "Symlink normal variant";
+ outputPaths = (
+ "$(CONFIGURATION_BUILD_DIR)/$(PRODUCT_NAME)_normal.a",
+ );
+ runOnlyForDeploymentPostprocessing = 0;
+ shellPath = "/bin/bash -e";
+ shellScript = "ln -fs \"${PRODUCT_NAME}.a\" \"${SCRIPT_OUTPUT_FILE_0}\"";
+ showEnvVarsInLog = 0;
};
/* End PBXShellScriptBuildPhase section */
@@ -381,9 +869,11 @@
isa = PBXSourcesBuildPhase;
buildActionMask = 2147483647;
files = (
+ E43570B9126E93380097AB9F /* provider.d in Sources */,
FC7BEDA40E8361E600161930 /* protocol.defs in Sources */,
+ E49F2499125D48D80057C971 /* resolver.c in Sources */,
+ E44EBE3E1251659900645D88 /* init.c in Sources */,
FC7BED990E8361E600161930 /* queue.c in Sources */,
- 2EC9C9B80E8809EF00E2499A /* legacy.c in Sources */,
721F5CCF0F15553500FF03A6 /* semaphore.c in Sources */,
96DF70BE0F38FE3C0074BD99 /* once.c in Sources */,
9676A0E10F3E755D00713ADB /* apply.c in Sources */,
@@ -391,121 +881,198 @@
965CD6350F3E806200D4E28D /* benchmark.c in Sources */,
96A8AA870F41E7A400CD570B /* source.c in Sources */,
96032E4B0F5CC8C700241C5F /* time.c in Sources */,
- FC18329F109923A7003403D5 /* mach.c in Sources */,
- E4BF990110A89607007655D0 /* time.c in Sources */,
+ 5AAB45C010D30B79004407EA /* data.c in Sources */,
+ 5A27262610F26F1900751FBC /* io.c in Sources */,
+ );
+ runOnlyForDeploymentPostprocessing = 0;
+ };
+ E49F24C7125D57FA0057C971 /* Sources */ = {
+ isa = PBXSourcesBuildPhase;
+ buildActionMask = 2147483647;
+ files = (
+ E43570BA126E93380097AB9F /* provider.d in Sources */,
+ E49F24C8125D57FA0057C971 /* protocol.defs in Sources */,
+ E49F24C9125D57FA0057C971 /* resolver.c in Sources */,
+ E49F24CA125D57FA0057C971 /* init.c in Sources */,
+ E49F24CB125D57FA0057C971 /* queue.c in Sources */,
+ E49F24CC125D57FA0057C971 /* semaphore.c in Sources */,
+ E49F24CD125D57FA0057C971 /* once.c in Sources */,
+ E49F24CE125D57FA0057C971 /* apply.c in Sources */,
+ E49F24CF125D57FA0057C971 /* object.c in Sources */,
+ E49F24D0125D57FA0057C971 /* benchmark.c in Sources */,
+ E49F24D1125D57FA0057C971 /* source.c in Sources */,
+ E49F24D2125D57FA0057C971 /* time.c in Sources */,
+ E49F24D3125D57FA0057C971 /* data.c in Sources */,
+ E49F24D4125D57FA0057C971 /* io.c in Sources */,
+ );
+ runOnlyForDeploymentPostprocessing = 0;
+ };
+ E4EC11AC12514302000DDBD1 /* Sources */ = {
+ isa = PBXSourcesBuildPhase;
+ buildActionMask = 2147483647;
+ files = (
+ E417A38412A472C4004D659D /* provider.d in Sources */,
+ E44EBE5412517EBE00645D88 /* protocol.defs in Sources */,
+ E49F2424125D3C970057C971 /* resolver.c in Sources */,
+ E44EBE5512517EBE00645D88 /* init.c in Sources */,
+ E4EC11AE12514302000DDBD1 /* queue.c in Sources */,
+ E4EC11AF12514302000DDBD1 /* semaphore.c in Sources */,
+ E4EC11B012514302000DDBD1 /* once.c in Sources */,
+ E4EC11B112514302000DDBD1 /* apply.c in Sources */,
+ E4EC11B212514302000DDBD1 /* object.c in Sources */,
+ E4EC11B312514302000DDBD1 /* benchmark.c in Sources */,
+ E4EC11B412514302000DDBD1 /* source.c in Sources */,
+ E4EC11B512514302000DDBD1 /* time.c in Sources */,
+ E4EC11B712514302000DDBD1 /* data.c in Sources */,
+ E4EC11B812514302000DDBD1 /* io.c in Sources */,
+ );
+ runOnlyForDeploymentPostprocessing = 0;
+ };
+ E4EC121812514715000DDBD1 /* Sources */ = {
+ isa = PBXSourcesBuildPhase;
+ buildActionMask = 2147483647;
+ files = (
+ E417A38512A472C5004D659D /* provider.d in Sources */,
+ E44EBE5612517EBE00645D88 /* protocol.defs in Sources */,
+ E49F2423125D3C960057C971 /* resolver.c in Sources */,
+ E44EBE5712517EBE00645D88 /* init.c in Sources */,
+ E4EC121A12514715000DDBD1 /* queue.c in Sources */,
+ E4EC121B12514715000DDBD1 /* semaphore.c in Sources */,
+ E4EC121C12514715000DDBD1 /* once.c in Sources */,
+ E4EC121D12514715000DDBD1 /* apply.c in Sources */,
+ E4EC121E12514715000DDBD1 /* object.c in Sources */,
+ E4EC121F12514715000DDBD1 /* benchmark.c in Sources */,
+ E4EC122012514715000DDBD1 /* source.c in Sources */,
+ E4EC122112514715000DDBD1 /* time.c in Sources */,
+ E4EC122312514715000DDBD1 /* data.c in Sources */,
+ E4EC122412514715000DDBD1 /* io.c in Sources */,
);
runOnlyForDeploymentPostprocessing = 0;
};
/* End PBXSourcesBuildPhase section */
+/* Begin PBXTargetDependency section */
+ C927F36910FD7F1A00C5AB8B /* PBXTargetDependency */ = {
+ isa = PBXTargetDependency;
+ name = ddt;
+ targetProxy = C927F36810FD7F1A00C5AB8B /* PBXContainerItemProxy */;
+ };
+ E4128E4A13B94BCE00ABB2CB /* PBXTargetDependency */ = {
+ isa = PBXTargetDependency;
+ target = D2AAC045055464E500DB518D /* libdispatch */;
+ targetProxy = E4128E4913B94BCE00ABB2CB /* PBXContainerItemProxy */;
+ };
+ E47D6ECB125FEB9D0070D91C /* PBXTargetDependency */ = {
+ isa = PBXTargetDependency;
+ target = E4EC118F12514302000DDBD1 /* libdispatch up resolved */;
+ targetProxy = E47D6ECA125FEB9D0070D91C /* PBXContainerItemProxy */;
+ };
+ E47D6ECD125FEBA10070D91C /* PBXTargetDependency */ = {
+ isa = PBXTargetDependency;
+ target = E4EC121612514715000DDBD1 /* libdispatch mp resolved */;
+ targetProxy = E47D6ECC125FEBA10070D91C /* PBXContainerItemProxy */;
+ };
+/* End PBXTargetDependency section */
+
/* Begin XCBuildConfiguration section */
1DEB91ED08733DB70010E9CD /* Release */ = {
isa = XCBuildConfiguration;
+ baseConfigurationReference = E40041AA125D705F0022B135 /* libdispatch-resolver.xcconfig */;
buildSettings = {
- COPY_PHASE_STRIP = NO;
- CURRENT_PROJECT_VERSION = "$(RC_ProjectSourceVersion)";
- EXECUTABLE_PREFIX = "";
- GCC_CW_ASM_SYNTAX = NO;
- GCC_ENABLE_CPP_EXCEPTIONS = NO;
- GCC_ENABLE_CPP_RTTI = NO;
- GCC_ENABLE_OBJC_EXCEPTIONS = NO;
- GCC_OPTIMIZATION_LEVEL = s;
- GCC_PREPROCESSOR_DEFINITIONS = (
- "DISPATCH_NO_LEGACY=1",
- "__DARWIN_NON_CANCELABLE=1",
- );
- GENERATE_MASTER_OBJECT_FILE = NO;
- INSTALL_PATH = /usr/local/lib/system;
- LINK_WITH_STANDARD_LIBRARIES = NO;
- OTHER_CFLAGS = (
- "-fno-unwind-tables",
- "-fno-exceptions",
- "-I$(SDKROOT)/System/Library/Frameworks/System.framework/PrivateHeaders",
- "-fdiagnostics-show-option",
- "-fsched-interblock",
- "-freorder-blocks",
- "-Xarch_x86_64",
- "-momit-leaf-frame-pointer",
- "-Xarch_i386",
- "-momit-leaf-frame-pointer",
- );
- OTHER_CFLAGS_debug = "-O0 -fstack-protector -fno-inline -DDISPATCH_DEBUG=1";
- PRIVATE_HEADERS_FOLDER_PATH = /usr/local/include/dispatch;
- PRODUCT_NAME = libdispatch;
- PUBLIC_HEADERS_FOLDER_PATH = /usr/include/dispatch;
- SEPARATE_STRIP = NO;
- VERSIONING_SYSTEM = "apple-generic";
- VERSION_INFO_PREFIX = __;
};
name = Release;
};
1DEB91F108733DB70010E9CD /* Release */ = {
isa = XCBuildConfiguration;
+ baseConfigurationReference = E43D93F11097917E004F6A62 /* libdispatch.xcconfig */;
buildSettings = {
- ALWAYS_SEARCH_USER_PATHS = NO;
- ARCHS = "$(ARCHS_STANDARD_32_64_BIT)";
- BUILD_VARIANTS = (
- normal,
- debug,
- profile,
- );
- COPY_PHASE_STRIP = NO;
- DEBUG_INFORMATION_FORMAT = "dwarf-with-dsym";
- GCC_ENABLE_PASCAL_STRINGS = NO;
- GCC_OPTIMIZATION_LEVEL = s;
- GCC_STRICT_ALIASING = YES;
- GCC_SYMBOLS_PRIVATE_EXTERN = YES;
- GCC_TREAT_WARNINGS_AS_ERRORS = YES;
- GCC_WARN_64_TO_32_BIT_CONVERSION = YES;
- GCC_WARN_ABOUT_MISSING_NEWLINE = YES;
- GCC_WARN_ABOUT_MISSING_PROTOTYPES = YES;
- GCC_WARN_ABOUT_RETURN_TYPE = YES;
- GCC_WARN_SHADOW = YES;
- GCC_WARN_UNUSED_VARIABLE = YES;
- HEADER_SEARCH_PATHS = (
- "$(SDKROOT)/System/Library/Frameworks/System.framework/PrivateHeaders",
- "$(PROJECT_DIR)",
- );
- LINK_WITH_STANDARD_LIBRARIES = YES;
- ONLY_ACTIVE_ARCH = NO;
- OTHER_CFLAGS = (
- "-fdiagnostics-show-option",
- "-fsched-interblock",
- "-freorder-blocks",
- "-Xarch_x86_64",
- "-momit-leaf-frame-pointer",
- "-Xarch_i386",
- "-momit-leaf-frame-pointer",
- );
- OTHER_CFLAGS_debug = "-O0 -fstack-protector -fno-inline -DDISPATCH_DEBUG=1";
- PREBINDING = NO;
- STRIP_INSTALLED_PRODUCT = NO;
- WARNING_CFLAGS = (
- "-Wall",
- "-Wextra",
- "-Waggregate-return",
- "-Wfloat-equal",
- "-Wpacked",
- "-Wmissing-declarations",
- "-Wstrict-overflow=4",
- "-Wstrict-aliasing=2",
- "-Wno-unused-parameter",
- );
};
name = Release;
};
- 721EB47A0F69D26F00845379 /* Release */ = {
+ 3F3C9357128E637B0042B1F7 /* Release */ = {
isa = XCBuildConfiguration;
buildSettings = {
};
name = Release;
};
- 7276FCBB0EB10E0F00F7F487 /* Release */ = {
+ 3F3C9358128E637B0042B1F7 /* Debug */ = {
+ isa = XCBuildConfiguration;
+ buildSettings = {
+ };
+ name = Debug;
+ };
+ C927F35B10FD7F0600C5AB8B /* Release */ = {
isa = XCBuildConfiguration;
buildSettings = {
};
name = Release;
};
+ C927F35C10FD7F0600C5AB8B /* Debug */ = {
+ isa = XCBuildConfiguration;
+ buildSettings = {
+ };
+ name = Debug;
+ };
+ E49F24D9125D57FA0057C971 /* Release */ = {
+ isa = XCBuildConfiguration;
+ buildSettings = {
+ };
+ name = Release;
+ };
+ E49F24DA125D57FA0057C971 /* Debug */ = {
+ isa = XCBuildConfiguration;
+ buildSettings = {
+ };
+ name = Debug;
+ };
+ E4EB382D1089033000C33AD4 /* Debug */ = {
+ isa = XCBuildConfiguration;
+ baseConfigurationReference = E43D93F11097917E004F6A62 /* libdispatch.xcconfig */;
+ buildSettings = {
+ BUILD_VARIANTS = debug;
+ ONLY_ACTIVE_ARCH = YES;
+ };
+ name = Debug;
+ };
+ E4EB382E1089033000C33AD4 /* Debug */ = {
+ isa = XCBuildConfiguration;
+ baseConfigurationReference = E40041AA125D705F0022B135 /* libdispatch-resolver.xcconfig */;
+ buildSettings = {
+ };
+ name = Debug;
+ };
+ E4EC11BD12514302000DDBD1 /* Release */ = {
+ isa = XCBuildConfiguration;
+ baseConfigurationReference = E40041A9125D70590022B135 /* libdispatch-resolved.xcconfig */;
+ buildSettings = {
+ DISPATCH_RESOLVED_VARIANT = up;
+ };
+ name = Release;
+ };
+ E4EC11BE12514302000DDBD1 /* Debug */ = {
+ isa = XCBuildConfiguration;
+ baseConfigurationReference = E40041A9125D70590022B135 /* libdispatch-resolved.xcconfig */;
+ buildSettings = {
+ DISPATCH_RESOLVED_VARIANT = up;
+ };
+ name = Debug;
+ };
+ E4EC122712514715000DDBD1 /* Release */ = {
+ isa = XCBuildConfiguration;
+ baseConfigurationReference = E40041A9125D70590022B135 /* libdispatch-resolved.xcconfig */;
+ buildSettings = {
+ DISPATCH_RESOLVED_VARIANT = mp;
+ };
+ name = Release;
+ };
+ E4EC122812514715000DDBD1 /* Debug */ = {
+ isa = XCBuildConfiguration;
+ baseConfigurationReference = E40041A9125D70590022B135 /* libdispatch-resolved.xcconfig */;
+ buildSettings = {
+ DISPATCH_RESOLVED_VARIANT = mp;
+ };
+ name = Debug;
+ };
/* End XCBuildConfiguration section */
/* Begin XCConfigurationList section */
@@ -513,6 +1080,7 @@
isa = XCConfigurationList;
buildConfigurations = (
1DEB91ED08733DB70010E9CD /* Release */,
+ E4EB382E1089033000C33AD4 /* Debug */,
);
defaultConfigurationIsVisible = 0;
defaultConfigurationName = Release;
@@ -521,22 +1089,52 @@
isa = XCConfigurationList;
buildConfigurations = (
1DEB91F108733DB70010E9CD /* Release */,
+ E4EB382D1089033000C33AD4 /* Debug */,
);
defaultConfigurationIsVisible = 0;
defaultConfigurationName = Release;
};
- 721EB4850F69D2A600845379 /* Build configuration list for PBXLegacyTarget "testbots" */ = {
+ 3F3C9356128E637B0042B1F7 /* Build configuration list for PBXAggregateTarget "libdispatch_Sim" */ = {
isa = XCConfigurationList;
buildConfigurations = (
- 721EB47A0F69D26F00845379 /* Release */,
+ 3F3C9357128E637B0042B1F7 /* Release */,
+ 3F3C9358128E637B0042B1F7 /* Debug */,
);
defaultConfigurationIsVisible = 0;
defaultConfigurationName = Release;
};
- 7276FCC80EB10E2300F7F487 /* Build configuration list for PBXLegacyTarget "test" */ = {
+ C927F35E10FD7F0B00C5AB8B /* Build configuration list for PBXAggregateTarget "libdispatch_tools" */ = {
isa = XCConfigurationList;
buildConfigurations = (
- 7276FCBB0EB10E0F00F7F487 /* Release */,
+ C927F35B10FD7F0600C5AB8B /* Release */,
+ C927F35C10FD7F0600C5AB8B /* Debug */,
+ );
+ defaultConfigurationIsVisible = 0;
+ defaultConfigurationName = Release;
+ };
+ E49F24D8125D57FA0057C971 /* Build configuration list for PBXNativeTarget "libdispatch no resolver" */ = {
+ isa = XCConfigurationList;
+ buildConfigurations = (
+ E49F24D9125D57FA0057C971 /* Release */,
+ E49F24DA125D57FA0057C971 /* Debug */,
+ );
+ defaultConfigurationIsVisible = 0;
+ defaultConfigurationName = Release;
+ };
+ E4EC11BC12514302000DDBD1 /* Build configuration list for PBXNativeTarget "libdispatch up resolved" */ = {
+ isa = XCConfigurationList;
+ buildConfigurations = (
+ E4EC11BD12514302000DDBD1 /* Release */,
+ E4EC11BE12514302000DDBD1 /* Debug */,
+ );
+ defaultConfigurationIsVisible = 0;
+ defaultConfigurationName = Release;
+ };
+ E4EC122612514715000DDBD1 /* Build configuration list for PBXNativeTarget "libdispatch mp resolved" */ = {
+ isa = XCConfigurationList;
+ buildConfigurations = (
+ E4EC122712514715000DDBD1 /* Release */,
+ E4EC122812514715000DDBD1 /* Debug */,
);
defaultConfigurationIsVisible = 0;
defaultConfigurationName = Release;
diff --git a/libdispatch.xcodeproj/project.xcworkspace/contents.xcworkspacedata b/libdispatch.xcodeproj/project.xcworkspace/contents.xcworkspacedata
new file mode 100644
index 0000000..23ad996
--- /dev/null
+++ b/libdispatch.xcodeproj/project.xcworkspace/contents.xcworkspacedata
@@ -0,0 +1,6 @@
+<?xml version="1.0" encoding="UTF-8"?>
+<Workspace>
+ <FileRef
+ location = "self:libdispatch.xcodeproj">
+ </FileRef>
+</Workspace>
diff --git a/man/Makefile.am b/man/Makefile.am
index d6d93aa..f57453a 100644
--- a/man/Makefile.am
+++ b/man/Makefile.am
@@ -2,117 +2,88 @@
#
#
-man3_MANS= \
- dispatch.3 \
- dispatch_after.3 \
- dispatch_api.3 \
- dispatch_apply.3 \
- dispatch_async.3 \
- dispatch_benchmark.3 \
+dist_man3_MANS= \
+ dispatch.3 \
+ dispatch_after.3 \
+ dispatch_api.3 \
+ dispatch_apply.3 \
+ dispatch_async.3 \
+ dispatch_data_create.3 \
dispatch_group_create.3 \
- dispatch_object.3 \
- dispatch_once.3 \
+ dispatch_io_create.3 \
+ dispatch_io_read.3 \
+ dispatch_object.3 \
+ dispatch_once.3 \
dispatch_queue_create.3 \
+ dispatch_read.3 \
dispatch_semaphore_create.3 \
dispatch_source_create.3 \
dispatch_time.3
+EXTRA_DIST= \
+ dispatch_benchmark.3
+
#
-# Install man page symlinks. Is there a better way to do this in automake?
+# Install man page hardlinks. Is there a better way to do this in automake?
#
+
+LN=ln
+
install-data-hook:
cd $(DESTDIR)$(mandir)/man3 && \
- $(LN_S) -f dispatch_after.3 dispatch_after_f.3
- cd $(DESTDIR)$(mandir)/man3 && \
- $(LN_S) -f dispatch_apply.3 dispatch_apply_f.3
- cd $(DESTDIR)$(mandir)/man3 && \
- $(LN_S) -f dispatch_async.3 dispatch_sync.3
- cd $(DESTDIR)$(mandir)/man3 && \
- $(LN_S) -f dispatch_async.3 dispatch_async_f.3
- cd $(DESTDIR)$(mandir)/man3 && \
- $(LN_S) -f dispatch_async.3 dispatch_sync_f.3
- cd $(DESTDIR)$(mandir)/man3 && \
- $(LN_S) -f dispatch_benchmark.3 dispatch_benchmark_f.3
- cd $(DESTDIR)$(mandir)/man3 && \
- $(LN_S) -f dispatch_group_create.3 dispatch_group_enter.3
- cd $(DESTDIR)$(mandir)/man3 && \
- $(LN_S) -f dispatch_group_create.3 dispatch_group_leave.3
- cd $(DESTDIR)$(mandir)/man3 && \
- $(LN_S) -f dispatch_group_create.3 dispatch_group_wait.3
- cd $(DESTDIR)$(mandir)/man3 && \
- $(LN_S) -f dispatch_group_create.3 dispatch_group_notify.3
- cd $(DESTDIR)$(mandir)/man3 && \
- $(LN_S) -f dispatch_group_create.3 dispatch_group_notify_f.3
- cd $(DESTDIR)$(mandir)/man3 && \
- $(LN_S) -f dispatch_group_create.3 dispatch_group_async.3
- cd $(DESTDIR)$(mandir)/man3 && \
- $(LN_S) -f dispatch_group_create.3 dispatch_group_async_f.3
- cd $(DESTDIR)$(mandir)/man3 && \
- $(LN_S) -f dispatch_object.3 dispatch_retain.3
- cd $(DESTDIR)$(mandir)/man3 && \
- $(LN_S) -f dispatch_object.3 dispatch_release.3
- cd $(DESTDIR)$(mandir)/man3 && \
- $(LN_S) -f dispatch_object.3 dispatch_suspend.3
- cd $(DESTDIR)$(mandir)/man3 && \
- $(LN_S) -f dispatch_object.3 dispatch_resume.3
- cd $(DESTDIR)$(mandir)/man3 && \
- $(LN_S) -f dispatch_object.3 dispatch_get_context.3
- cd $(DESTDIR)$(mandir)/man3 && \
- $(LN_S) -f dispatch_object.3 dispatch_set_context.3
- cd $(DESTDIR)$(mandir)/man3 && \
- $(LN_S) -f dispatch_object.3 dispatch_set_finalizer_f.3
- cd $(DESTDIR)$(mandir)/man3 && \
- $(LN_S) -f dispatch_once.3 dispatch_once_f.3
- cd $(DESTDIR)$(mandir)/man3 && \
- $(LN_S) -f dispatch_queue_create.3 dispatch_queue_get_label.3
- cd $(DESTDIR)$(mandir)/man3 && \
- $(LN_S) -f dispatch_queue_create.3 dispatch_get_current_queue.3
- cd $(DESTDIR)$(mandir)/man3 && \
- $(LN_S) -f dispatch_queue_create.3 dispatch_get_global_queue.3
- cd $(DESTDIR)$(mandir)/man3 && \
- $(LN_S) -f dispatch_queue_create.3 dispatch_get_main_queue.3
- cd $(DESTDIR)$(mandir)/man3 && \
- $(LN_S) -f dispatch_queue_create.3 dispatch_main.3
- cd $(DESTDIR)$(mandir)/man3 && \
- $(LN_S) -f dispatch_queue_create.3 dispatch_set_target_queue.3
- cd $(DESTDIR)$(mandir)/man3 && \
- $(LN_S) -f dispatch_semaphore_create.3 \
- dispatch_semaphore_signal.3
- cd $(DESTDIR)$(mandir)/man3 && \
- $(LN_S) -f dispatch_semaphore_create.3 \
- dispatch_semaphore_wait.3
- cd $(DESTDIR)$(mandir)/man3 && \
- $(LN_S) -f dispatch_source_create.3 \
- dispatch_source_set_event_handler.3
- cd $(DESTDIR)$(mandir)/man3 && \
- $(LN_S) -f dispatch_source_create.3 \
- dispatch_source_set_event_handler_f.3
- cd $(DESTDIR)$(mandir)/man3 && \
- $(LN_S) -f dispatch_source_create.3 \
- dispatch_source_set_cancel_handler.3
- cd $(DESTDIR)$(mandir)/man3 && \
- $(LN_S) -f dispatch_source_create.3 \
- dispatch_source_set_cancel_handler_f.3
- cd $(DESTDIR)$(mandir)/man3 && \
- $(LN_S) -f dispatch_source_create.3 \
- dispatch_source_cancel.3
- cd $(DESTDIR)$(mandir)/man3 && \
- $(LN_S) -f dispatch_source_create.3 \
- dispatch_source_testcancel.3
- cd $(DESTDIR)$(mandir)/man3 && \
- $(LN_S) -f dispatch_source_create.3 \
- dispatch_source_get_handle.3
- cd $(DESTDIR)$(mandir)/man3 && \
- $(LN_S) -f dispatch_source_create.3 dispatch_source_get_mask.3
- cd $(DESTDIR)$(mandir)/man3 && \
- $(LN_S) -f dispatch_source_create.3 \
- dispatch_source_get_data.3
- cd $(DESTDIR)$(mandir)/man3 && \
- $(LN_S) -f dispatch_source_create.3 \
- dispatch_source_merge_data.3
- cd $(DESTDIR)$(mandir)/man3 && \
- $(LN_S) -f dispatch_source_create.3 \
- dispatch_source_set_timer.3
- cd $(DESTDIR)$(mandir)/man3 && \
- $(LN_S) -f dispatch_time.3 dispatch_walltime.3
-
+ $(LN) -f dispatch_after.3 dispatch_after_f.3 && \
+ $(LN) -f dispatch_apply.3 dispatch_apply_f.3 && \
+ $(LN) -f dispatch_async.3 dispatch_sync.3 && \
+ $(LN) -f dispatch_async.3 dispatch_async_f.3 && \
+ $(LN) -f dispatch_async.3 dispatch_sync_f.3 && \
+ $(LN) -f dispatch_group_create.3 dispatch_group_enter.3 && \
+ $(LN) -f dispatch_group_create.3 dispatch_group_leave.3 && \
+ $(LN) -f dispatch_group_create.3 dispatch_group_wait.3 && \
+ $(LN) -f dispatch_group_create.3 dispatch_group_notify.3 && \
+ $(LN) -f dispatch_group_create.3 dispatch_group_notify_f.3 && \
+ $(LN) -f dispatch_group_create.3 dispatch_group_async.3 && \
+ $(LN) -f dispatch_group_create.3 dispatch_group_async_f.3 && \
+ $(LN) -f dispatch_object.3 dispatch_retain.3 && \
+ $(LN) -f dispatch_object.3 dispatch_release.3 && \
+ $(LN) -f dispatch_object.3 dispatch_suspend.3 && \
+ $(LN) -f dispatch_object.3 dispatch_resume.3 && \
+ $(LN) -f dispatch_object.3 dispatch_get_context.3 && \
+ $(LN) -f dispatch_object.3 dispatch_set_context.3 && \
+ $(LN) -f dispatch_object.3 dispatch_set_finalizer_f.3 && \
+ $(LN) -f dispatch_once.3 dispatch_once_f.3 && \
+ $(LN) -f dispatch_queue_create.3 dispatch_queue_get_label.3 && \
+ $(LN) -f dispatch_queue_create.3 dispatch_get_current_queue.3 && \
+ $(LN) -f dispatch_queue_create.3 dispatch_get_global_queue.3 && \
+ $(LN) -f dispatch_queue_create.3 dispatch_get_main_queue.3 && \
+ $(LN) -f dispatch_queue_create.3 dispatch_main.3 && \
+ $(LN) -f dispatch_queue_create.3 dispatch_set_target_queue.3 && \
+ $(LN) -f dispatch_semaphore_create.3 dispatch_semaphore_signal.3 && \
+ $(LN) -f dispatch_semaphore_create.3 dispatch_semaphore_wait.3 && \
+ $(LN) -f dispatch_source_create.3 dispatch_source_set_event_handler.3 && \
+ $(LN) -f dispatch_source_create.3 dispatch_source_set_event_handler_f.3 && \
+ $(LN) -f dispatch_source_create.3 dispatch_source_set_cancel_handler.3 && \
+ $(LN) -f dispatch_source_create.3 dispatch_source_set_cancel_handler_f.3 && \
+ $(LN) -f dispatch_source_create.3 dispatch_source_cancel.3 && \
+ $(LN) -f dispatch_source_create.3 dispatch_source_testcancel.3 && \
+ $(LN) -f dispatch_source_create.3 dispatch_source_get_handle.3 && \
+ $(LN) -f dispatch_source_create.3 dispatch_source_get_mask.3 && \
+ $(LN) -f dispatch_source_create.3 dispatch_source_get_data.3 && \
+ $(LN) -f dispatch_source_create.3 dispatch_source_merge_data.3 && \
+ $(LN) -f dispatch_source_create.3 dispatch_source_set_timer.3 && \
+ $(LN) -f dispatch_time.3 dispatch_walltime.3 && \
+ $(LN) -f dispatch_data_create.3 dispatch_data_create_concat.3 && \
+ $(LN) -f dispatch_data_create.3 dispatch_data_create_subrange.3 && \
+ $(LN) -f dispatch_data_create.3 dispatch_data_create_map.3 && \
+ $(LN) -f dispatch_data_create.3 dispatch_data_apply.3 && \
+ $(LN) -f dispatch_data_create.3 dispatch_data_copy_subrange.3 && \
+ $(LN) -f dispatch_data_create.3 dispatch_data_copy_region.3 && \
+ $(LN) -f dispatch_data_create.3 dispatch_data_get_size.3 && \
+ $(LN) -f dispatch_data_create.3 dispatch_data_empty.3 && \
+ $(LN) -f dispatch_io_create.3 dispatch_io_create_with_path.3 && \
+ $(LN) -f dispatch_io_create.3 dispatch_io_set_high_water.3 && \
+ $(LN) -f dispatch_io_create.3 dispatch_io_set_low_water.3 && \
+ $(LN) -f dispatch_io_create.3 dispatch_io_set_interval.3 && \
+ $(LN) -f dispatch_io_create.3 dispatch_io_close.3 && \
+ $(LN) -f dispatch_io_read.3 dispatch_io_write.3 && \
+ $(LN) -f dispatch_read.3 dispatch_write.3
diff --git a/man/dispatch.3 b/man/dispatch.3
index 65e5659..c55be96 100644
--- a/man/dispatch.3
+++ b/man/dispatch.3
@@ -1,4 +1,4 @@
-.\" Copyright (c) 2008-2009 Apple Inc. All rights reserved.
+.\" Copyright (c) 2008-2010 Apple Inc. All rights reserved.
.Dd May 1, 2009
.Dt dispatch 3
.Os Darwin
@@ -32,7 +32,10 @@
.Xr dispatch_apply 3 ,
.Xr dispatch_async 3 ,
.Xr dispatch_benchmark 3 ,
+.Xr dispatch_data_create 3 ,
.Xr dispatch_group_create 3 ,
+.Xr dispatch_io_create 3 ,
+.Xr dispatch_io_read 3 ,
.Xr dispatch_object 3 ,
.Xr dispatch_once 3 ,
.Xr dispatch_queue_create 3 ,
diff --git a/man/dispatch_after.3 b/man/dispatch_after.3
index f4871c3..4c55214 100644
--- a/man/dispatch_after.3
+++ b/man/dispatch_after.3
@@ -1,4 +1,4 @@
-.\" Copyright (c) 2008-2009 Apple Inc. All rights reserved.
+.\" Copyright (c) 2008-2010 Apple Inc. All rights reserved.
.Dd May 1, 2009
.Dt dispatch_after 3
.Os Darwin
@@ -35,6 +35,9 @@
For a more detailed description about submitting blocks to queues, see
.Xr dispatch_async 3 .
.Sh CAVEATS
+.Fn dispatch_after
+retains the passed queue.
+.Pp
Specifying
.Vt DISPATCH_TIME_NOW
as the
@@ -42,11 +45,13 @@
parameter
is supported, but is not as efficient as calling
.Fn dispatch_async .
+.Pp
The result of passing
.Vt DISPATCH_TIME_FOREVER
as the
.Fa when
parameter is undefined.
+.Pp
.Sh FUNDAMENTALS
The
.Fn dispatch_after
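A minimal usage sketch of the dispatch_after() behavior documented in the page
above; the five-second delay and the printed output are arbitrary choices for
illustration:

    #include <dispatch/dispatch.h>
    #include <stdio.h>
    #include <stdlib.h>

    int
    main(void)
    {
        // dispatch_after() retains the queue passed to it (see CAVEATS above),
        // so the main queue reference stays valid until the block has run.
        dispatch_time_t when = dispatch_time(DISPATCH_TIME_NOW, 5 * NSEC_PER_SEC);

        dispatch_after(when, dispatch_get_main_queue(), ^{
            printf("fired roughly five seconds later\n");
            exit(0);
        });

        dispatch_main();    // never returns; services the main queue
    }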
diff --git a/man/dispatch_apply.3 b/man/dispatch_apply.3
index 6bd7f4b..5a43a0a 100644
--- a/man/dispatch_apply.3
+++ b/man/dispatch_apply.3
@@ -1,4 +1,4 @@
-.\" Copyright (c) 2008-2009 Apple Inc. All rights reserved.
+.\" Copyright (c) 2008-2010 Apple Inc. All rights reserved.
.Dd May 1, 2009
.Dt dispatch_apply 3
.Os Darwin
@@ -20,7 +20,7 @@
.Fn dispatch_apply
function provides data-level concurrency through a "for (;;)" loop like primitive:
.Bd -literal
-dispatch_queue_t the_queue = dispatch_get_concurrent_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT);
+dispatch_queue_t the_queue = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);
size_t iterations = 10;
// 'idx' is zero indexed, just like:
@@ -47,7 +47,7 @@
Start with a stride of one and work upwards until the desired performance is
achieved (perhaps using a power of two search):
.Bd -literal
-#define STRIDE 3
+#define STRIDE 3
dispatch_apply(count / STRIDE, queue, ^(size_t idx) {
size_t j = idx * STRIDE;
@@ -62,6 +62,17 @@
printf("%zu\\n", i);
}
.Ed
+.Sh IMPLIED REFERENCES
+Synchronous functions within the dispatch framework hold an implied reference
+on the target queue. In other words, the synchronous function borrows the
+reference of the calling function (this is valid because the calling function
+is blocked waiting for the result of the synchronous function, and therefore
+cannot modify the reference count of the target queue until after the
+synchronous function has returned).
+.Pp
+This is in contrast to asynchronous functions which must retain both the block
+and target queue for the duration of the asynchronous operation (as the calling
+function may immediately release its interest in these objects).
.Sh FUNDAMENTALS
Conceptually,
.Fn dispatch_apply
@@ -74,6 +85,17 @@
.Fn dispatch_apply
function is a wrapper around
.Fn dispatch_apply_f .
+.Sh CAVEATS
+Unlike
+.Fn dispatch_async ,
+a block submitted to
+.Fn dispatch_apply
+is expected to be either independent or dependent
+.Em only
+on work already performed in lower-indexed invocations of the block. If
+the block's index dependency is non-linear, it is recommended to
+use a for-loop around invocations of
+.Fn dispatch_async .
.Sh SEE ALSO
.Xr dispatch 3 ,
.Xr dispatch_async 3 ,
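A short sketch of the dispatch_apply() usage pattern the CAVEATS above
recommend; the array size and the squaring work are arbitrary placeholders:

    #include <dispatch/dispatch.h>

    #define COUNT 1000

    static int results[COUNT];

    // Each iteration writes only to its own slot and depends on no other
    // iteration, so dispatch_apply() may run them concurrently.
    void
    square_all(void)
    {
        dispatch_queue_t q =
            dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);

        dispatch_apply(COUNT, q, ^(size_t idx) {
            results[idx] = (int)(idx * idx);
        });
        // dispatch_apply() is synchronous: every iteration has completed here.
        // If an iteration depended non-linearly on other iterations, a plain
        // for-loop around dispatch_async() would be the recommended shape.
    }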
diff --git a/man/dispatch_async.3 b/man/dispatch_async.3
index 4664397..9c09bb2 100644
--- a/man/dispatch_async.3
+++ b/man/dispatch_async.3
@@ -1,4 +1,4 @@
-.\" Copyright (c) 2008-2009 Apple Inc. All rights reserved.
+.\" Copyright (c) 2008-2010 Apple Inc. All rights reserved.
.Dd May 1, 2009
.Dt dispatch_async 3
.Os Darwin
@@ -115,14 +115,14 @@
// This is just an example of nested blocks.
dispatch_retain(destination_queue);
-
+
dispatch_async(obj->queue, ^{
ssize_t r = read(obj->fd, where, bytes);
int err = errno;
dispatch_async(destination_queue, ^{
reply_block(r, err);
- });
+ });
dispatch_release(destination_queue);
});
}
@@ -171,7 +171,7 @@
against queue A, which deadlocks both examples. This is bug-for-bug compatible
with nontrivial pthread usage. In fact, nontrivial reentrancy is impossible to
support in recursive locks once the ultimate level of reentrancy is deployed
-(IPC or RPC).
+(IPC or RPC).
.Sh IMPLIED REFERENCES
Synchronous functions within the dispatch framework hold an implied reference
on the target queue. In other words, the synchronous function borrows the
diff --git a/man/dispatch_data_create.3 b/man/dispatch_data_create.3
new file mode 100644
index 0000000..96965f2
--- /dev/null
+++ b/man/dispatch_data_create.3
@@ -0,0 +1,206 @@
+.\" Copyright (c) 2010 Apple Inc. All rights reserved.
+.Dd December 1, 2010
+.Dt dispatch_data_create 3
+.Os Darwin
+.Sh NAME
+.Nm dispatch_data_create ,
+.Nm dispatch_data_create_concat ,
+.Nm dispatch_data_create_subrange ,
+.Nm dispatch_data_create_map ,
+.Nm dispatch_data_apply ,
+.Nm dispatch_data_copy_region ,
+.Nm dispatch_data_get_size
+.Nd create and manipulate dispatch data objects
+.Sh SYNOPSIS
+.Fd #include <dispatch/dispatch.h>
+.Ft dispatch_data_t
+.Fo dispatch_data_create
+.Fa "const void* buffer"
+.Fa "size_t size"
+.Fa "dispatch_queue_t queue"
+.Fa "dispatch_block_t destructor"
+.Fc
+.Ft dispatch_data_t
+.Fo dispatch_data_create_concat
+.Fa "dispatch_data_t data1"
+.Fa "dispatch_data_t data2"
+.Fc
+.Ft dispatch_data_t
+.Fo dispatch_data_create_subrange
+.Fa "dispatch_data_t data"
+.Fa "size_t offset"
+.Fa "size_t length"
+.Fc
+.Ft dispatch_data_t
+.Fo dispatch_data_create_map
+.Fa "dispatch_data_t data"
+.Fa "const void **buffer_ptr"
+.Fa "size_t *size_ptr"
+.Fc
+.Ft bool
+.Fo dispatch_data_apply
+.Fa "dispatch_data_t data"
+.Fa "bool (^applier)(dispatch_data_t, size_t, const void *, size_t)"
+.Fc
+.Ft dispatch_data_t
+.Fo dispatch_data_copy_region
+.Fa "dispatch_data_t data"
+.Fa "size_t location"
+.Fa "size_t *offset_ptr"
+.Fc
+.Ft size_t
+.Fo dispatch_data_get_size
+.Fa "dispatch_data_t data"
+.Fc
+.Vt dispatch_data_t dispatch_data_empty ;
+.Sh DESCRIPTION
+Dispatch data objects are opaque containers of bytes that represent one or more
+regions of memory. They are created either from memory buffers managed by the
+application or the system or from other dispatch data objects. Dispatch data
+objects are immutable and the memory regions they represent are required to
+remain unchanged for the lifetime of all data objects that reference them.
+Dispatch data objects avoid copying the represented memory as much as possible.
+Multiple data objects can represent the same memory regions or subsections
+thereof.
+.Sh CREATION
+The
+.Fn dispatch_data_create
+function creates a new dispatch data object of given
+.Fa size
+from a
+.Fa buffer .
+The provided
+.Fa destructor
+block will be submitted to the specified
+.Fa queue
+when the object reaches the end of its lifecycle, indicating that the system no
+longer references the
+.Fa buffer .
+This allows the application to deallocate
+the associated storage. The
+.Fa queue
+argument is ignored if one of the following predefined destructors is passed:
+.Bl -tag -width DISPATCH_DATA_DESTRUCTOR_DEFAULT -compact -offset indent
+.It DISPATCH_DATA_DESTRUCTOR_FREE
+indicates that the provided buffer can be deallocated with
+.Xr free 3
+directly.
+.It DISPATCH_DATA_DESTRUCTOR_DEFAULT
+indicates that the provided buffer is not managed by the application and should
+be copied into memory managed and automatically deallocated by the system.
+.El
+.Pp
+The
+.Fn dispatch_data_create_concat
+function creates a new data object representing the concatenation of the memory
+regions represented by the provided data objects.
+.Pp
+The
+.Fn dispatch_data_create_subrange
+function creates a new data object representing the sub-region of the provided
+.Fa data
+object specified by the
+.Fa offset
+and
+.Fa length
+parameters.
+.Pp
+The
+.Fn dispatch_data_create_map
+function creates a new data object by mapping the memory represented by the
+provided
+.Fa data
+object as a single contiguous memory region (moving or copying memory as
+necessary). If the
+.Fa buffer_ptr
+and
+.Fa size_ptr
+references are not
+.Dv NULL ,
+they are filled with the location and extent of the contiguous region, allowing
+direct read access to the mapped memory. These values are valid only as long as
+the newly created object has not been released.
+.Sh ACCESS
+The
+.Fn dispatch_data_apply
+function provides read access to represented memory without requiring it to be
+mapped as a single contiguous region. It traverses the memory regions
+represented by the
+.Fa data
+argument in logical order, invokes the specified
+.Fa applier
+block for each region and returns a boolean indicating whether traversal
+completed successfully. The
+.Fa applier
+block is passed the following arguments for each memory region and returns a
+boolean indicating whether traversal should continue:
+.Bl -tag -width "dispatch_data_t rgn" -compact -offset indent
+.It Fa "dispatch_data_t rgn"
+data object representing the region
+.It Fa "size_t offset"
+logical position of the region in
+.Fa data
+.It Vt "const void *loc"
+memory location of the region
+.It Vt "size_t size"
+extent of the region
+.El
+The
+.Fa rgn
+data object is released by the system when the
+.Fa applier
+block returns.
+The associated memory location
+.Fa loc
+is valid only as long as
+.Fa rgn
+has not been deallocated; if
+.Fa loc
+is needed outside of the
+.Fa applier
+block, the
+.Fa rgn
+object must be retained in the block.
+.Pp
+The
+.Fn dispatch_data_copy_region
+function finds the contiguous memory region containing the logical position
+specified by the
+.Fa location
+argument among the regions represented by the provided
+.Fa data
+object and returns a newly created copy of the data object representing that
+region. The variable specified by the
+.Fa offset_ptr
+argument is filled with the logical position where the returned object starts
+in the
+.Fa data
+object.
+.Pp
+The
+.Fn dispatch_data_get_size
+function returns the logical size of the memory region or regions represented
+by the provided
+.Fa data
+object.
+.Sh EMPTY DATA OBJECT
+The
+.Vt dispatch_data_empty
+object is the global singleton object representing a zero-length memory region.
+It is a valid input to any dispatch_data functions that take data object
+parameters.
+.Sh MEMORY MODEL
+Dispatch data objects are retained and released via calls to
+.Fn dispatch_retain
+and
+.Fn dispatch_release .
+Data objects passed as arguments to a dispatch data
+.Sy create
+or
+.Sy copy
+function can be released when the function returns. The newly created object
+holds implicit references to their constituent memory regions as necessary.
+.Sh SEE ALSO
+.Xr dispatch 3 ,
+.Xr dispatch_object 3 ,
+.Xr dispatch_io_read 3
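A minimal sketch of the dispatch_data API described in this page, assuming a
small string buffer and the DISPATCH_DATA_DESTRUCTOR_FREE destructor; the
printed output is illustrative only:

    #include <dispatch/dispatch.h>
    #include <stdio.h>
    #include <string.h>

    int
    main(void)
    {
        char *buf = strdup("hello, dispatch data");

        // With DISPATCH_DATA_DESTRUCTOR_FREE the system calls free(3) on the
        // buffer when it is no longer referenced; the queue argument is ignored.
        dispatch_data_t data = dispatch_data_create(buf, strlen(buf), NULL,
                DISPATCH_DATA_DESTRUCTOR_FREE);

        printf("logical size: %zu bytes\n", dispatch_data_get_size(data));

        // Read the represented regions without mapping them contiguously.
        dispatch_data_apply(data, ^bool(dispatch_data_t rgn, size_t offset,
                const void *loc, size_t size) {
            printf("region at offset %zu: %.*s\n", offset, (int)size,
                    (const char *)loc);
            (void)rgn;
            return true;    // keep traversing
        });

        dispatch_release(data);
        return 0;
    }

Because the data object owns the buffer once created, the caller never frees
buf directly.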
diff --git a/man/dispatch_group_create.3 b/man/dispatch_group_create.3
index df85a54..1dae0ef 100644
--- a/man/dispatch_group_create.3
+++ b/man/dispatch_group_create.3
@@ -84,7 +84,7 @@
by submitting the
.Fa block
to the specified
-.Fa queue
+.Fa queue
once all blocks associated with the
.Fa group
have completed.
diff --git a/man/dispatch_io_create.3 b/man/dispatch_io_create.3
new file mode 100644
index 0000000..9087442
--- /dev/null
+++ b/man/dispatch_io_create.3
@@ -0,0 +1,238 @@
+.\" Copyright (c) 2010 Apple Inc. All rights reserved.
+.Dd December 1, 2010
+.Dt dispatch_io_create 3
+.Os Darwin
+.Sh NAME
+.Nm dispatch_io_create ,
+.Nm dispatch_io_create_with_path ,
+.Nm dispatch_io_close ,
+.Nm dispatch_io_set_high_water ,
+.Nm dispatch_io_set_low_water ,
+.Nm dispatch_io_set_interval
+.Nd open, close and configure dispatch I/O channels
+.Sh SYNOPSIS
+.Fd #include <dispatch/dispatch.h>
+.Ft dispatch_io_t
+.Fo dispatch_io_create
+.Fa "dispatch_io_type_t type"
+.Fa "int fd"
+.Fa "dispatch_queue_t queue"
+.Fa "void (^cleanup_handler)(int error)"
+.Fc
+.Ft dispatch_io_t
+.Fo dispatch_io_create_with_path
+.Fa "dispatch_io_type_t type"
+.Fa "const char *path"
+.Fa "int oflag"
+.Fa "mode_t mode"
+.Fa "dispatch_queue_t queue"
+.Fa "void (^cleanup_handler)(int error)"
+.Fc
+.Ft void
+.Fo dispatch_io_close
+.Fa "dispatch_io_t channel"
+.Fa "dispatch_io_close_flags_t flags"
+.Fc
+.Ft void
+.Fo dispatch_io_set_high_water
+.Fa "dispatch_io_t channel"
+.Fa "size_t high_water"
+.Fc
+.Ft void
+.Fo dispatch_io_set_low_water
+.Fa "dispatch_io_t channel"
+.Fa "size_t low_water"
+.Fc
+.Ft void
+.Fo dispatch_io_set_interval
+.Fa "dispatch_io_t channel"
+.Fa "uint64_t interval"
+.Fa "dispatch_io_interval_flags_t flags"
+.Fc
+.Sh DESCRIPTION
+The dispatch I/O framework is an API for asynchronous read and write I/O
+operations. It is an application of the ideas and idioms present in the
+.Xr dispatch 3
+framework to device I/O. Dispatch I/O enables an application to more easily
+avoid blocking I/O operations and allows it to more directly express its I/O
+requirements than by using the raw POSIX file API. Dispatch I/O will make a
+best effort to optimize how and when asynchronous I/O operations are performed
+based on the capabilities of the targeted device.
+.Pp
+This page provides details on how to create and configure dispatch I/O
+channels. Reading from and writing to these channels is covered in the
+.Xr dispatch_io_read 3
+page. The dispatch I/O framework also provides the convenience functions
+.Xr dispatch_read 3
+and
+.Xr dispatch_write 3
+for uses that do not require the full functionality provided by I/O channels.
+.Sh FUNDAMENTALS
+A dispatch I/O channel represents the asynchronous I/O policy applied to a file
+descriptor and encapsulates it for the purposes of ownership tracking while
+I/O operations are ongoing.
+.Sh CHANNEL TYPES
+Dispatch I/O channels can have one of the following types:
+.Bl -tag -width DISPATCH_IO_STREAM -compact -offset indent
+.It DISPATCH_IO_STREAM
+channels that represent a stream of bytes and do not support reads and writes
+at arbitrary offsets, such as pipes or sockets. Channels of this type perform
+read and write operations sequentially at the current file pointer position and
+ignore any offset specified. Depending on the underlying file descriptor, read
+operations may be performed simultaneously with write operations.
+.It DISPATCH_IO_RANDOM
+channels that represent random access files on disk. Only supported for
+seekable file descriptors and paths. Channels of this type may perform
+submitted read and write operations concurrently at the specified offset
+(interpreted relative to the position of the file pointer when the channel was
+created).
+.El
+.Sh CHANNEL OPENING AND CLOSING
+The
+.Fn dispatch_io_create
+and
+.Fn dispatch_io_create_with_path
+functions create a dispatch I/O channel of provided
+.Fa type
+from a file descriptor
+.Fa fd
+or a pathname, respectively. They can be thought of as
+analogous to the
+.Xr fdopen 3
+POSIX function and the
+.Xr fopen 3
+function in the standard C library. For a channel created from a
+pathname, the provided
+.Fa path ,
+.Fa oflag
+and
+.Fa mode
+parameters will be passed to
+.Xr open 2
+when the first I/O operation on the channel is ready to execute. The provided
+.Fa cleanup_handler
+block will be submitted to the specified
+.Fa queue
+when all I/O operations on the channel have completed and it is closed or
+reaches the end of its lifecycle. If an error occurs during channel creation,
+the
+.Fa cleanup_handler
+block will be submitted immediately and passed an
+.Fa error
+parameter with the POSIX error encountered. After creating a dispatch I/O
+channel from a file descriptor, the application must take care not to modify
+that file descriptor until the associated
+.Fa cleanup_handler
+is invoked, see
+.Sx "FILEDESCRIPTOR OWNERSHIP"
+for details.
+.Pp
+The
+.Fn dispatch_io_close
+function closes a dispatch I/O channel to new submissions of I/O operations. If
+.Dv DISPATCH_IO_STOP
+is passed in the
+.Fa flags
+parameter, the system will in addition not perform the I/O operations already
+submitted to the channel that are still pending and will make a best effort to
+interrupt any ongoing operations. Handlers for operations so affected will be
+passed the
+.Er ECANCELED
+error code, along with any partial results.
+.Sh CHANNEL CONFIGURATION
+Dispatch I/O channels have high-water mark, low-water mark and interval
+configuration settings that determine if and when partial results from I/O
+operations are delivered via their associated I/O handlers.
+.Pp
+The
+.Fn dispatch_io_set_high_water
+and
+.Fn dispatch_io_set_low_water
+functions configure the water mark settings of a
+.Fa channel .
+The system will read
+or write at least the number of bytes specified by
+.Fa low_water
+before submitting an I/O handler with partial results, and will make a best
+effort to submit an I/O handler as soon as the number of bytes read or written
+reaches
+.Fa high_water .
+.Pp
+The
+.Fn dispatch_io_set_interval
+function configures the time
+.Fa interval
+at which I/O handlers are submitted (measured in nanoseconds). If
+.Dv DISPATCH_IO_STRICT_INTERVAL
+is passed in the
+.Fa flags
+parameter, the interval will be strictly observed even if there is an
+insufficient amount of data to deliver; otherwise delivery will be skipped for
+intervals where the amount of available data is inferior to the channel's
+low-water mark. Note that the system may defer enqueueing interval I/O handlers
+by a small unspecified amount of leeway in order to align with other system
+activity for improved system performance or power consumption.
+.Pp
+.Sh DATA DELIVERY
+The size of data objects passed to I/O handlers for a channel will never be
+larger than the high-water mark set on the channel; it will also never be
+smaller than the low-water mark, except in the following cases:
+.Bl -dash -offset indent -compact
+.It
+the final handler invocation for an I/O operation
+.It
+EOF was encountered
+.It
+the channel has an interval with the
+.Dv DISPATCH_IO_STRICT_INTERVAL
+flag set
+.El
+Bear in mind that dispatch I/O channels will typically deliver amounts of data
+significantly higher than the low-water mark. The default value for the
+low-water mark is unspecified, but must be assumed to allow intermediate
+handler invocations. The default value for the high-water mark is
+unlimited (i.e.\&
+.Dv SIZE_MAX ) .
+Channels that require intermediate results of fixed size should have both the
+low-water and the high-water mark set to that size. Channels that do not wish
+to receive any intermediate results should have the low-water mark set to
+.Dv SIZE_MAX .
+.Pp
+.Sh FILEDESCRIPTOR OWNERSHIP
+When an application creates a dispatch I/O channel from a file descriptor with
+the
+.Fn dispatch_io_create
+function, the system takes control of that file descriptor until the channel is
+closed, an error occurs on the file descriptor or all references to the channel
+are released. At that time the channel's cleanup handler will be enqueued and
+control over the file descriptor relinquished, making it safe for the
+application to
+.Xr close 2
+the file descriptor. While a file descriptor is under the control of a dispatch
+I/O channel, file descriptor flags such as
+.Dv O_NONBLOCK
+will be modified by the system on behalf of the application. It is an error for
+the application to modify a file descriptor directly while it is under the
+control of a dispatch I/O channel, but it may create further I/O channels
+from that file descriptor or use the
+.Xr dispatch_read 3
+and
+.Xr dispatch_write 3
+convenience functions with that file descriptor. If multiple I/O channels have
+been created from the same file descriptor, all the associated cleanup handlers
+will be submitted together once the last channel has been closed or all
+references to those channels have been released. If convenience functions have
+also been used on that file descriptor, submission of their handlers will be
+tied to the submission of the channel cleanup handlers as well.
+.Sh MEMORY MODEL
+Dispatch I/O channel objects are retained and released via calls to
+.Fn dispatch_retain
+and
+.Fn dispatch_release .
+.Sh SEE ALSO
+.Xr dispatch 3 ,
+.Xr dispatch_io_read 3 ,
+.Xr dispatch_object 3 ,
+.Xr dispatch_read 3 ,
+.Xr fopen 3 ,
+.Xr open 2
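A sketch of channel creation and configuration as described in this page; the
O_RDONLY stream channel, the helper name open_stream_channel, and the
4 KiB / 1 MiB water marks are illustrative assumptions:

    #include <dispatch/dispatch.h>
    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    dispatch_io_t
    open_stream_channel(const char *path)
    {
        int fd = open(path, O_RDONLY);
        if (fd < 0) return NULL;

        dispatch_queue_t q =
            dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);

        // The cleanup handler runs once the system has relinquished control
        // of fd; only then may the application close it (see FILEDESCRIPTOR
        // OWNERSHIP above).
        dispatch_io_t channel = dispatch_io_create(DISPATCH_IO_STREAM, fd, q,
                ^(int error) {
            if (error) fprintf(stderr, "channel cleanup, error %d\n", error);
            close(fd);
        });
        if (channel == NULL) return NULL;   // cleanup handler reports the error

        // Deliver partial results in chunks between 4 KiB and 1 MiB.
        dispatch_io_set_low_water(channel, 4 * 1024);
        dispatch_io_set_high_water(channel, 1024 * 1024);
        return channel;
    }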
diff --git a/man/dispatch_io_read.3 b/man/dispatch_io_read.3
new file mode 100644
index 0000000..51c3b1c
--- /dev/null
+++ b/man/dispatch_io_read.3
@@ -0,0 +1,151 @@
+.\" Copyright (c) 2010 Apple Inc. All rights reserved.
+.Dd December 1, 2010
+.Dt dispatch_io_read 3
+.Os Darwin
+.Sh NAME
+.Nm dispatch_io_read ,
+.Nm dispatch_io_write
+.Nd submit read and write operations to dispatch I/O channels
+.Sh SYNOPSIS
+.Fd #include <dispatch/dispatch.h>
+.Ft void
+.Fo dispatch_io_read
+.Fa "dispatch_io_t channel"
+.Fa "off_t offset"
+.Fa "size_t length"
+.Fa "dispatch_queue_t queue"
+.Fa "void (^handler)(bool done, dispatch_data_t data, int error)"
+.Fc
+.Ft void
+.Fo dispatch_io_write
+.Fa "dispatch_io_t channel"
+.Fa "off_t offset"
+.Fa "dispatch_data_t dispatch"
+.Fa "dispatch_queue_t queue"
+.Fa "void (^handler)(bool done, dispatch_data_t data, int error)"
+.Fc
+.Sh DESCRIPTION
+The dispatch I/O framework is an API for asynchronous read and write I/O
+operations. It is an application of the ideas and idioms present in the
+.Xr dispatch 3
+framework to device I/O. Dispatch I/O enables an application to more easily
+avoid blocking I/O operations and allows it to more directly express its I/O
+requirements than by using the raw POSIX file API. Dispatch I/O will make a
+best effort to optimize how and when asynchronous I/O operations are performed
+based on the capabilities of the targeted device.
+.Pp
+This page provides details on how to read from and write to dispatch I/O
+channels. Creation and configuration of these channels is covered in the
+.Xr dispatch_io_create 3
+page. The dispatch I/O framework also provides the convenience functions
+.Xr dispatch_read 3
+and
+.Xr dispatch_write 3
+for uses that do not require the full functionality provided by I/O channels.
+.Pp
+.Sh FUNDAMENTALS
+The
+.Fn dispatch_io_read
+and
+.Fn dispatch_io_write
+functions are used to perform asynchronous read and write operations on
+dispatch I/O channels. They can be thought of as asynchronous versions of the
+.Xr fread 3
+and
+.Xr fwrite 3
+functions in the standard C library.
+.Sh READ OPERATIONS
+The
+.Fn dispatch_io_read
+function schedules an I/O read operation on the specified dispatch I/O
+.Va channel .
+As results from the read operation become available, the provided
+.Va handler
+block will be submitted to the specified
+.Va queue .
+The block will be passed a dispatch data object representing the data that has
+been read since the handler's previous invocation.
+.Pp
+The
+.Va offset
+parameter indicates where the read operation should begin. For a channel of
+.Dv DISPATCH_IO_RANDOM
+type it is interpreted relative to the position of the file pointer when the
+channel was created, for a channel of
+.Dv DISPATCH_IO_STREAM
+type it is ignored and the read operation will begin at the current file
+pointer position.
+.Pp
+The
+.Va length
+parameter indicates the number of bytes that should be read from the I/O
+channel. Pass
+.Dv SIZE_MAX
+to keep reading until EOF is encountered (for a channel created from a
+disk-based file this happens when reading past the end of the physical file).
+.Sh WRITE OPERATIONS
+The
+.Fn dispatch_io_write
+function schedules an I/O write operation on the specified dispatch I/O
+.Va channel .
+As the write operation progresses, the provided
+.Va handler
+block will be submitted to the specified
+.Va queue .
+The block will be passed a dispatch data object representing the data that
+remains to be written as part of this I/O operation.
+.Pp
+The
+.Va offset
+parameter indicates where the write operation should begin. It is interpreted
+as for read operations above.
+.Pp
+The
+.Va data
+parameter specifies the location and amount of data to be written, encapsulated
+as a dispatch data object. The object is retained by the system until the write
+operation is complete.
+.Sh I/O HANDLER BLOCKS
+Dispatch I/O handler blocks submitted to a channel via the
+.Fn dispatch_io_read
+or
+.Fn dispatch_io_write
+functions will be executed one or more times depending on system load and the
+channel's configuration settings (see
+.Xr dispatch_io_create 3
+for details). The handler block need not be reentrant safe;
+no new I/O handler instance is submitted until the previously enqueued handler
+block has returned.
+.Pp
+The dispatch
+.Va data
+object passed to an I/O handler block will be released by the system when the
+block returns; if access to the memory buffer it represents is needed outside
+of the handler, the handler block must retain the data object or create a new
+(e.g.\& concatenated) data object from it (see
+.Xr dispatch_data_create 3
+for details).
+.Pp
+Once an I/O handler block is invoked with the
+.Va done
+flag set, the associated I/O operation is complete and that handler block will
+not be run again. If an unrecoverable error occurs while performing the I/O
+operation, the handler block will be submitted with the
+.Va done
+flag set and the appropriate POSIX error code in the
+.Va error
+parameter. An invocation of a handler block with the
+.Va done
+flag set, zero
+.Va error
+and
+.Va data
+set to
+.Vt dispatch_data_empty
+indicates that the I/O operation has encountered EOF.
+.Sh SEE ALSO
+.Xr dispatch 3 ,
+.Xr dispatch_data_create 3 ,
+.Xr dispatch_io_create 3 ,
+.Xr dispatch_read 3 ,
+.Xr fread 3
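A sketch of the handler conventions documented above, assuming a previously
created channel (for example, one returned by the open_stream_channel() sketch
earlier); the helper name read_until_eof is hypothetical:

    #include <dispatch/dispatch.h>
    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    void
    read_until_eof(dispatch_io_t channel)
    {
        dispatch_queue_t q =
            dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);

        // SIZE_MAX length: keep reading until EOF. The offset is ignored for
        // a DISPATCH_IO_STREAM channel.
        dispatch_io_read(channel, 0, SIZE_MAX, q,
                ^(bool done, dispatch_data_t data, int error) {
            if (data != NULL) {
                printf("delivered %zu bytes\n", dispatch_data_get_size(data));
                // Retain `data` (or build a new data object from it) if the
                // bytes are needed after this handler returns.
            }
            if (done) {
                if (error) {
                    fprintf(stderr, "read failed: %d\n", error);
                } else {
                    printf("EOF\n");  // done, no error, data == dispatch_data_empty
                }
            }
        });
    }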
diff --git a/man/dispatch_object.3 b/man/dispatch_object.3
index 0f9758d..29c1621 100644
--- a/man/dispatch_object.3
+++ b/man/dispatch_object.3
@@ -1,4 +1,4 @@
-.\" Copyright (c) 2008-2009 Apple Inc. All rights reserved.
+.\" Copyright (c) 2008-2010 Apple Inc. All rights reserved.
.Dd May 1, 2009
.Dt dispatch_object 3
.Os Darwin
@@ -58,7 +58,7 @@
or resumed with the functions
.Fn dispatch_suspend
and
-.Fn dispatch_resume
+.Fn dispatch_resume
respectively.
The dispatch framework always checks the suspension status before executing a
block, but such changes never affect a block during execution (non-preemptive).
@@ -69,8 +69,9 @@
.Pp
.Em Important :
suspension applies to all aspects of the dispatch object life cycle, including
-the finalizer function and cancellation handler. Therefore it is important to
-balance calls to
+the finalizer function and cancellation handler. Suspending an object causes it
+to be retained and resuming an object causes it to be released. Therefore it is
+important to balance calls to
.Fn dispatch_suspend
and
.Fn dispatch_resume
@@ -91,6 +92,7 @@
reference to the object is released.
This gives the
application an opportunity to free the context data associated with the object.
+The finalizer will be run on the object's target queue.
.Pp
The result of getting or setting the context of an object that is not a
dispatch queue or a dispatch source is undefined.
@@ -99,4 +101,5 @@
.Xr dispatch_group_create 3 ,
.Xr dispatch_queue_create 3 ,
.Xr dispatch_semaphore_create 3 ,
-.Xr dispatch_source_create 3
+.Xr dispatch_source_create 3 ,
+.Xr dispatch_set_target_queue 3
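A small sketch of the balanced suspend/resume pattern described above; the
queue label and the trailing dispatch_sync() used to wait for the pending
block are illustrative choices:

    #include <dispatch/dispatch.h>
    #include <stdio.h>

    int
    main(void)
    {
        dispatch_queue_t q = dispatch_queue_create("com.example.suspendable", NULL);

        // Suspending retains the queue; suspension takes effect between
        // blocks, never in the middle of one.
        dispatch_suspend(q);

        dispatch_async(q, ^{
            printf("runs only after dispatch_resume()\n");
        });

        // Every dispatch_suspend() must be balanced by a dispatch_resume()
        // before the final reference is released.
        dispatch_resume(q);

        // Synchronous submission waits for the pending block (and borrows our
        // reference to the queue, see IMPLIED REFERENCES in dispatch_async(3)).
        dispatch_sync(q, ^{ });

        dispatch_release(q);
        return 0;
    }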
diff --git a/man/dispatch_queue_create.3 b/man/dispatch_queue_create.3
index 83f5058..9b3e6a9 100644
--- a/man/dispatch_queue_create.3
+++ b/man/dispatch_queue_create.3
@@ -1,4 +1,4 @@
-.\" Copyright (c) 2008-2009 Apple Inc. All rights reserved.
+.\" Copyright (c) 2008-2010 Apple Inc. All rights reserved.
.Dd May 1, 2008
.Dt dispatch_queue_create 3
.Os Darwin
@@ -53,11 +53,13 @@
By default, queues created with
.Fn dispatch_queue_create
wait for the previously dequeued block to complete before dequeuing the next
-block. This FIFO completion behavior is sometimes simply described as a "serial queue."
+block. This FIFO completion behavior is sometimes simply described as a "serial
+queue." All memory writes performed by a block dispatched to a serial queue are
+guaranteed to be visible to subsequent blocks dispatched to the same queue.
Queues are not bound to any specific thread of execution and blocks submitted
-to independent queues may execute concurrently.
-Queues, like all dispatch objects, are reference counted and newly created
-queues have a reference count of one.
+to independent queues may execute concurrently. Queues, like all dispatch
+objects, are reference counted and newly created queues have a reference count
+of one.
.Pp
The optional
.Fa label
@@ -80,13 +82,13 @@
Queues may be temporarily suspended and resumed with the functions
.Fn dispatch_suspend
and
-.Fn dispatch_resume
+.Fn dispatch_resume
respectively. Suspension is checked prior to block execution and is
.Em not
preemptive.
.Sh MAIN QUEUE
-The dispatch framework provides a default serial queue for the application to use.
-This queue is accessed via
+The dispatch framework provides a default serial queue for the application to
+use. This queue is accessed via
.Fn dispatch_get_main_queue .
Programs must call
.Fn dispatch_main
@@ -98,8 +100,8 @@
Unlike the main queue or queues allocated with
.Fn dispatch_queue_create ,
the global concurrent queues schedule blocks as soon as threads become
-available (non-FIFO completion order). The global concurrent queues represent
-three priority bands:
+available (non-FIFO completion order). Four global concurrent queues are
+provided, representing the following priority bands:
.Bl -bullet -compact -offset indent
.It
DISPATCH_QUEUE_PRIORITY_HIGH
@@ -107,12 +109,26 @@
DISPATCH_QUEUE_PRIORITY_DEFAULT
.It
DISPATCH_QUEUE_PRIORITY_LOW
+.It
+DISPATCH_QUEUE_PRIORITY_BACKGROUND
.El
.Pp
-Blocks submitted to the high priority global queue will be invoked before those
-submitted to the default or low priority global queues. Blocks submitted to the
-low priority global queue will only be invoked if no blocks are pending on the
-default or high priority queues.
+The priority of a global concurrent queue controls the scheduling priority of
+the threads created by the system to invoke the blocks submitted to that queue.
+Global queues with lower priority will be scheduled for execution after all
+global queues with higher priority have been scheduled. Additionally, items on
+the background priority global queue will execute on threads with background
+state as described in
+.Xr setpriority 2
+(i.e.\& disk I/O is throttled and the thread's scheduling priority is set to
+the lowest value).
+.Pp
+Use the
+.Fn dispatch_get_global_queue
+function to obtain the global queue of given priority. The
+.Fa flags
+argument is reserved for future use and must be zero. Passing any value other
+than zero may result in a NULL return value.
.Pp
.Sh RETURN VALUES
The
@@ -131,13 +147,13 @@
.Pp
The
.Fn dispatch_get_current_queue
-function always returns a valid queue. When called from within a block submitted
-to a dispatch queue, that queue will be returned. If this function is called from
-the main thread before
+function always returns a valid queue. When called from within a block
+submitted to a dispatch queue, that queue will be returned. If this function is
+called from the main thread before
.Fn dispatch_main
is called, then the result of
.Fn dispatch_get_main_queue
-is returned. Otherwise, the result of
+is returned. The result of
.Fo dispatch_get_global_queue
.Fa DISPATCH_QUEUE_PRIORITY_DEFAULT
.Fa 0
@@ -151,55 +167,70 @@
The
.Fn dispatch_set_target_queue
function updates the target queue of the given dispatch object. The target
-queue of an object is responsible for processing the object. Currently only
-dispatch queues and dispatch sources are supported by this function. The result
-of using
-.Fn dispatch_set_target_queue
-with any other dispatch object type is undefined.
+queue of an object is responsible for processing the object.
.Pp
The new target queue is retained by the given object before the previous target
-queue is released. The new target queue will take effect between block
-executions, but not in the middle of any existing block executions
+queue is released. The new target queue setting will take effect between block
+executions on the object, but not in the middle of any existing block executions
(non-preemptive).
.Pp
-The priority of a dispatch queue is inherited by its target queue.
+The default target queue of all dispatch objects created by the application is
+the default priority global concurrent queue. To reset an object's target queue
+to the default, pass the
+.Dv DISPATCH_TARGET_QUEUE_DEFAULT
+constant to
+.Fn dispatch_set_target_queue .
+.Pp
+The priority of a dispatch queue is inherited from its target queue.
In order to change the priority of a queue created with
.Fn dispatch_queue_create ,
use the
.Fn dispatch_get_global_queue
-function to obtain a target queue of the desired priority. The
-.Fa flags
-argument is reserved for future use and must be zero. Passing any value other
-than zero may result in a
-.Vt NULL
-return value.
+function to obtain a target queue of the desired priority.
+.Pp
+Blocks submitted to a serial queue whose target queue is another serial queue
+will not be invoked concurrently with blocks submitted to the target queue or
+to any other queue with that same target queue.
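+For example, two serial queues that target a third serial queue will never
+invoke their blocks concurrently with each other (a minimal sketch; the queue
+labels are illustrative):
+.Bd -literal -offset indent
+dispatch_queue_t target, first, second;
+
+target = dispatch_queue_create("com.example.target", NULL);
+first  = dispatch_queue_create("com.example.first", NULL);
+second = dispatch_queue_create("com.example.second", NULL);
+dispatch_set_target_queue(first, target);
+dispatch_set_target_queue(second, target);
+.Ed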
.Pp
The target queue of a dispatch source specifies where its event handler and
cancellation handler blocks will be submitted. See
.Xr dispatch_source_create 3
for more information about dispatch sources.
.Pp
-The result of passing the main queue or a global concurrent queue to the first
+The target queue of a dispatch I/O channel specifies the priority of the global
+queue where its I/O operations are executed. See
+.Xr dispatch_io_create 3
+for more information about dispatch I/O channels.
+.Pp
+For all other dispatch object types, the only function of the target queue is
+to determine where an object's finalizer function is invoked.
+.Pp
+The result of passing the main queue or a global concurrent queue as the first
argument of
.Fn dispatch_set_target_queue
is undefined.
.Pp
-Directly or indirectly setting the target queue of a dispatch queue to itself is undefined.
+Directly or indirectly setting the target queue of a dispatch queue to itself is
+undefined.
.Sh CAVEATS
-Code cannot make any assumptions about the queue returned by
-.Fn dispatch_get_current_queue .
-The returned queue may have arbitrary policies that may surprise code that tries
-to schedule work with the queue. The list of policies includes, but is not
-limited to, queue width (i.e. serial vs. concurrent), scheduling priority,
-security credential or filesystem configuration. Therefore,
+The
.Fn dispatch_get_current_queue
-.Em MUST
-only be used for identity tests or debugging.
+function is only recommended for debugging and logging purposes. Code must not
+make any assumptions about the queue returned, unless it is one of the global
+queues or a queue the code has itself created. The returned queue may have
+arbitrary policies that may surprise code that tries to schedule work with the
+queue. The list of policies includes, but is not limited to, queue width (i.e.
+serial vs. concurrent), scheduling priority, security credential or filesystem
+configuration.
+.Pp
+It is equally unsafe for code to assume that synchronous execution onto a queue
+is safe from deadlock if that queue is not the one returned by
+.Fn dispatch_get_current_queue .
.Sh COMPATIBILITY
Cocoa applications need not call
.Fn dispatch_main .
-Blocks submitted to the main queue will be executed as part of the "common modes"
-of the application's main NSRunLoop or CFRunLoop.
+Blocks submitted to the main queue will be executed as part of the "common
+modes" of the application's main NSRunLoop or CFRunLoop.
However, blocks submitted to the main queue in applications using
.Fn dispatch_main
are not guaranteed to execute on the main thread.
@@ -302,7 +333,7 @@
.Va errno
is a per-thread variable and must be copied out explicitly as the block may be
invoked on a different thread of execution than the caller. Another example of
-per-thread data that would need to be copied is the use of
+per-thread data that would need to be copied is the use of
.Fn getpwnam
instead of
.Fn getpwnam_r .
diff --git a/man/dispatch_read.3 b/man/dispatch_read.3
new file mode 100644
index 0000000..38e88de
--- /dev/null
+++ b/man/dispatch_read.3
@@ -0,0 +1,123 @@
+.\" Copyright (c) 2010 Apple Inc. All rights reserved.
+.Dd December 1, 2010
+.Dt dispatch_read 3
+.Os Darwin
+.Sh NAME
+.Nm dispatch_read ,
+.Nm dispatch_write
+.Nd asynchronously read from and write to file descriptors
+.Sh SYNOPSIS
+.Fd #include <dispatch/dispatch.h>
+.Ft void
+.Fo dispatch_read
+.Fa "int fd"
+.Fa "size_t length"
+.Fa "dispatch_queue_t queue"
+.Fa "void (^handler)(dispatch_data_t data, int error)"
+.Fc
+.Ft void
+.Fo dispatch_write
+.Fa "int fd"
+.Fa "dispatch_data_t data"
+.Fa "dispatch_queue_t queue"
+.Fa "void (^handler)(dispatch_data_t data, int error)"
+.Fc
+.Sh DESCRIPTION
+The
+.Fn dispatch_read
+and
+.Fn dispatch_write
+functions asynchronously read from and write to POSIX file descriptors. They
+can be thought of as asynchronous, callback-based versions of the
+.Fn fread
+and
+.Fn fwrite
+functions provided by the standard C library. They are convenience functions
+based on the
+.Xr dispatch_io_read 3
+and
+.Xr dispatch_io_write 3
+functions, intended for simple one-shot read or write requests. Multiple
+requests on the same file descriptor are better handled with the full
+underlying dispatch I/O channel functions.
+.Sh BEHAVIOR
+The
+.Fn dispatch_read
+function schedules an asynchronous read operation on the file descriptor
+.Va fd .
+Once the file descriptor is readable, the system will read as much data as is
+currently available, up to the specified
+.Va length ,
+starting at the current file pointer position. The given
+.Va handler
+block will be submitted to
+.Va queue
+when the operation completes or an error occurs. The block will be passed a
+dispatch
+.Va data
+object with the result of the read operation. If an error occurred while
+reading from the file descriptor, the
+.Va error
+parameter to the block will be set to the appropriate POSIX error code and
+.Va data
+will contain any data that could be read successfully. If the file pointer
+position is at end-of-file, empty
+.Va data
+and zero
+.Va error
+will be passed to the handler block.
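+.Pp
+For example, a minimal sketch of a one-shot asynchronous read (the descriptor
+.Va fd
+is assumed to have been opened elsewhere and the handler body is purely
+illustrative):
+.Bd -literal -offset indent
+dispatch_read(fd, SIZE_MAX, dispatch_get_main_queue(),
+		^(dispatch_data_t data, int error) {
+	if (error == 0) {
+		// process 'data' here
+	}
+});
+.Ed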
+.Pp
+The
+.Fn dispatch_write
+function schedules an asynchronous write operation on the file descriptor
+.Va fd .
+The system will attempt to write the entire contents of the provided
+.Va data
+object to
+.Va fd
+at the current file pointer position. The given
+.Va handler
+block will be submitted to
+.Va queue
+when the operation completes or an error occurs. If the write operation
+completed successfully, the
+.Va error
+parameter to the block will be set to zero, otherwise it will be set to the
+appropriate POSIX error code and the
+.Va data
+parameter will contain any data that could not be written.
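+.Pp
+Similarly, a minimal sketch of a one-shot asynchronous write; the
+.Va fd
+and
+.Va data
+arguments are assumed to have been obtained elsewhere:
+.Bd -literal -offset indent
+dispatch_write(fd, data, dispatch_get_main_queue(),
+		^(dispatch_data_t unwritten, int error) {
+	if (error != 0) {
+		// handle the error; 'unwritten' holds the data not written
+	}
+});
+.Ed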
+.Sh CAVEATS
+The
+.Va data
+object passed to a
+.Va handler
+block is released by the system when the block returns. If
+.Va data
+is needed outside of the handler block, the block must concatenate, copy, or
+retain it.
+.Pp
+Once an asynchronous read or write operation has been submitted on a file
+descriptor
+.Va fd ,
+the system takes control of that file descriptor until the
+.Va handler
+block is executed. During this time the application must not manipulate
+.Va fd
+directly; in particular, it is only safe to close
+.Va fd
+from the handler block (or after it has returned).
+.Pp
+If multiple asynchronous read or write operations are submitted to the same
+file descriptor, they will be performed in order, but their handlers will only
+be submitted once all operations have completed and control over the file
+descriptor has been relinquished. For details on this and on the interaction
+with dispatch I/O channels created from the same file descriptor, see
+.Sx FILEDESCRIPTOR OWNERSHIP
+in
+.Xr dispatch_io_create 3 .
+.Sh SEE ALSO
+.Xr dispatch 3 ,
+.Xr dispatch_data_create 3 ,
+.Xr dispatch_io_create 3 ,
+.Xr dispatch_io_read 3 ,
+.Xr fread 3
diff --git a/man/dispatch_semaphore_create.3 b/man/dispatch_semaphore_create.3
index 54b64a2..096e0e3 100644
--- a/man/dispatch_semaphore_create.3
+++ b/man/dispatch_semaphore_create.3
@@ -1,4 +1,4 @@
-.\" Copyright (c) 2008-2009 Apple Inc. All rights reserved.
+.\" Copyright (c) 2008-2010 Apple Inc. All rights reserved.
.Dd May 1, 2009
.Dt dispatch_semaphore_create 3
.Os Darwin
@@ -33,7 +33,8 @@
.Sh COMPLETION SYNCHRONIZATION
If the
.Fa count
-parameter is equal to zero, then the semaphore is useful for synchronizing completion of work.
+parameter is equal to zero, then the semaphore is useful for synchronizing
+completion of work.
For example:
.Bd -literal -offset indent
sema = dispatch_semaphore_create(0);
@@ -50,7 +51,8 @@
.Sh FINITE RESOURCE POOL
If the
.Fa count
-parameter is greater than zero, then the semaphore is useful for managing a finite pool of resources.
+parameter is greater than zero, then the semaphore is useful for managing a
+finite pool of resources.
For example, a library that wants to limit Unix descriptor usage:
.Bd -literal -offset indent
sema = dispatch_semaphore_create(getdtablesize() / 4);
@@ -81,7 +83,8 @@
.Pp
The
.Fn dispatch_semaphore_wait
-function returns zero upon success and non-zero after the timeout expires. If the timeout is DISPATCH_TIME_FOREVER, then
+function returns zero upon success and non-zero after the timeout expires. If
+the timeout is DISPATCH_TIME_FOREVER, then
.Fn dispatch_semaphore_wait
waits forever and always returns zero.
.Sh MEMORY MODEL
@@ -90,6 +93,15 @@
and
.Fn dispatch_release .
.Sh CAVEATS
+Unbalanced dispatch semaphores cannot be released.
+For a given semaphore, calls to
+.Fn dispatch_semaphore_signal
+and
+.Fn dispatch_semaphore_wait
+must be balanced before
+.Fn dispatch_release
+is called on it.
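+For example (a minimal sketch; the queue, semaphore and block shown are
+illustrative):
+.Bd -literal -offset indent
+sema = dispatch_semaphore_create(0);
+
+dispatch_async(queue, ^{
+	// do some work, then:
+	dispatch_semaphore_signal(sema);
+});
+
+dispatch_semaphore_wait(sema, DISPATCH_TIME_FOREVER);
+dispatch_release(sema); // balanced: one signal, one wait
+.Ed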
+.Pp
Dispatch semaphores are strict counting semaphores.
In other words, dispatch semaphores do not saturate at any particular value.
Saturation can be achieved through atomic compare-and-swap logic.
diff --git a/man/dispatch_source_create.3 b/man/dispatch_source_create.3
index c5b0113..1d774a9 100644
--- a/man/dispatch_source_create.3
+++ b/man/dispatch_source_create.3
@@ -1,4 +1,4 @@
-.\" Copyright (c) 2008-2009 Apple Inc. All rights reserved.
+.\" Copyright (c) 2008-2010 Apple Inc. All rights reserved.
.Dd May 1, 2009
.Dt dispatch_source_create 3
.Os Darwin
@@ -38,7 +38,7 @@
.Fo dispatch_source_cancel
.Fa "dispatch_source_t source"
.Fc
-.Ft void
+.Ft long
.Fo dispatch_source_testcancel
.Fa "dispatch_source_t source"
.Fc
@@ -67,7 +67,7 @@
.Fa "uint64_t leeway"
.Fc
.Sh DESCRIPTION
-Dispatch event sources may be used to monitor a variety of system objects and
+Dispatch event sources may be used to monitor a variety of system objects and
events including file descriptors, mach ports, processes, virtual filesystem
nodes, signal delivery and timers.
.Pp
@@ -81,9 +81,17 @@
.Fn dispatch_retain
and
.Fn dispatch_release
-respectively. Newly created sources are created in a suspended state. After the
-source has been configured by setting an event handler, cancellation handler,
-context, etc., the source must be activated by a call to
+respectively. The
+.Fa queue
+parameter specifies the target queue of the new source object; it will
+be retained by the source object. Pass the
+.Dv DISPATCH_TARGET_QUEUE_DEFAULT
+constant to use the default target queue (the default priority global
+concurrent queue).
+.Pp
+Newly created sources are created in a suspended state. After the source has
+been configured by setting an event handler, cancellation handler, context,
+etc., the source must be activated by a call to
.Fn dispatch_resume
before any events will be delivered.
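+.Pp
+For example, a minimal sketch of configuring and activating a timer source on
+the main queue (the handler body is purely illustrative):
+.Bd -literal -offset indent
+dispatch_source_t timer;
+
+timer = dispatch_source_create(DISPATCH_SOURCE_TYPE_TIMER, 0, 0,
+		dispatch_get_main_queue());
+dispatch_source_set_timer(timer, DISPATCH_TIME_NOW, NSEC_PER_SEC, 0);
+dispatch_source_set_event_handler(timer, ^{
+	// periodic work goes here
+});
+dispatch_resume(timer);
+.Ed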
.Pp
@@ -117,21 +125,21 @@
.Fa mask
arguments to
.Fn dispatch_source_create
-and the return values of the
+and the return values of the
.Fn dispatch_source_get_handle ,
.Fn dispatch_source_get_mask ,
and
-.Fn dispatch_source_get_data
+.Fn dispatch_source_get_data
functions should be interpreted according to the type of the dispatch source.
.Pp
-The
+The
.Fn dispatch_source_get_handle
function
returns the underlying handle to the dispatch source (i.e. file descriptor,
mach port, process identifier, etc.). The result of this function may be cast
directly to the underlying type.
.Pp
-The
+The
.Fn dispatch_source_get_mask
function
returns the set of flags that were specified at source creation time via the
@@ -216,7 +224,7 @@
.Vt DISPATCH_SOURCE_TYPE_DATA_OR
.Pp
Sources of this type allow applications to manually trigger the source's event
-handler via a call to
+handler via a call to
.Fn dispatch_source_merge_data .
The data will be merged with the source's pending data via an atomic add or
logic OR (based on the source's type), and the event handler block will be
@@ -268,7 +276,7 @@
may be one or more of the following:
.Bl -tag -width "XXDISPATCH_PROC_SIGNAL" -compact -offset indent
.It \(bu DISPATCH_PROC_EXIT
-The process has exited and is available to
+The process has exited and is available to
.Xr wait 2 .
.It \(bu DISPATCH_PROC_FORK
The process has created one or more child processes.
@@ -277,7 +285,7 @@
.Xr execve 2
or
.Xr posix_spawn 2 .
-.It \(bu DISPATCH_PROC_REAP
+.It \(bu DISPATCH_PROC_REAP
The process status has been collected by its parent process via
.Xr wait 2 .
.It \(bu DISPATCH_PROC_SIGNAL
@@ -378,7 +386,7 @@
in nanoseconds, specifies the period at which the timer should repeat. All
timers will repeat indefinitely until
.Fn dispatch_source_cancel
-is called. The
+is called. The
.Fa leeway ,
in nanoseconds, is a hint to the system that it may defer the timer in order to
align with other system activity for improved system performance or reduced
@@ -387,7 +395,7 @@
to be expected for all timers even when a value of zero is used.
.Pp
.Em Note :
-Under the C language, untyped numbers default to the
+Under the C language, untyped numbers default to the
.Vt int
type. This can lead to truncation bugs when arithmetic operations with other
numbers are expected to generate a
@@ -404,7 +412,7 @@
Sources of this type monitor the virtual filesystem nodes for state changes.
The
.Fa handle
-is a file descriptor (int) referencing the node to monitor, and
+is a file descriptor (int) referencing the node to monitor, and
the
.Fa mask
may be one or more of the following:
@@ -423,7 +431,7 @@
.It \(bu DISPATCH_VNODE_RENAME
The referenced node was renamed
.It \(bu DISPATCH_VNODE_REVOKE
-Access to the referenced node was revoked via
+Access to the referenced node was revoked via
.Xr revoke 2
or the underlying filesystem was unmounted.
.El
diff --git a/private/Makefile.am b/private/Makefile.am
new file mode 100644
index 0000000..488ef52
--- /dev/null
+++ b/private/Makefile.am
@@ -0,0 +1,10 @@
+#
+#
+#
+
+noinst_HEADERS= \
+ benchmark.h \
+ private.h \
+ queue_private.h \
+ source_private.h
+
diff --git a/private/benchmark.h b/private/benchmark.h
index 99f95c6..df42a8a 100644
--- a/private/benchmark.h
+++ b/private/benchmark.h
@@ -2,19 +2,19 @@
* Copyright (c) 2008-2009 Apple Inc. All rights reserved.
*
* @APPLE_APACHE_LICENSE_HEADER_START@
- *
+ *
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
- *
+ *
* http://www.apache.org/licenses/LICENSE-2.0
- *
+ *
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
- *
+ *
* @APPLE_APACHE_LICENSE_HEADER_END@
*/
@@ -32,7 +32,7 @@
#include <dispatch/base.h> // for HeaderDoc
#endif
-__DISPATCH_BEGIN_DECLS
+__BEGIN_DECLS
/*!
* @function dispatch_benchmark
@@ -64,7 +64,8 @@
* 3) Code bound by critical sections may be inferred by retrograde changes in
* performance as concurrency is increased.
* 3a) Intentional: locks, mutexes, and condition variables.
- * 3b) Accidental: unrelated and frequently modified data on the same cache-line.
+ * 3b) Accidental: unrelated and frequently modified data on the same
+ * cache-line.
*/
#ifdef __BLOCKS__
__OSX_AVAILABLE_STARTING(__MAC_10_6,__IPHONE_4_0)
@@ -78,6 +79,6 @@
uint64_t
dispatch_benchmark_f(size_t count, void *ctxt, void (*func)(void *));
-__DISPATCH_END_DECLS
+__END_DECLS
#endif
diff --git a/private/private.h b/private/private.h
index 1af8b9e..9bb0e01 100644
--- a/private/private.h
+++ b/private/private.h
@@ -1,20 +1,20 @@
/*
- * Copyright (c) 2008-2009 Apple Inc. All rights reserved.
+ * Copyright (c) 2008-2011 Apple Inc. All rights reserved.
*
* @APPLE_APACHE_LICENSE_HEADER_START@
- *
+ *
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
- *
+ *
* http://www.apache.org/licenses/LICENSE-2.0
- *
+ *
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
- *
+ *
* @APPLE_APACHE_LICENSE_HEADER_END@
*/
@@ -31,7 +31,7 @@
#include <TargetConditionals.h>
#endif
-#if HAVE_MACH
+#if TARGET_OS_MAC
#include <mach/boolean.h>
#include <mach/mach.h>
#include <mach/message.h>
@@ -44,9 +44,9 @@
#endif
#include <pthread.h>
-#ifdef __IPHONE_OS_VERSION_MIN_REQUIRED
-/* iPhone OS does not make any legacy definitions visible */
-#define DISPATCH_NO_LEGACY
+#define DISPATCH_NO_LEGACY 1
+#ifdef DISPATCH_LEGACY // <rdar://problem/7366725>
+#error "Dispatch legacy API unavailable."
#endif
#ifndef __DISPATCH_BUILDING_DISPATCH__
@@ -65,10 +65,6 @@
#include <dispatch/queue_private.h>
#include <dispatch/source_private.h>
-#ifndef DISPATCH_NO_LEGACY
-#include <dispatch/legacy.h>
-#endif
-
#undef __DISPATCH_INDIRECT__
#endif /* !__DISPATCH_BUILDING_DISPATCH__ */
@@ -76,22 +72,18 @@
/* LEGACY: Use DISPATCH_API_VERSION */
#define LIBDISPATCH_VERSION DISPATCH_API_VERSION
-__DISPATCH_BEGIN_DECLS
+__BEGIN_DECLS
DISPATCH_EXPORT DISPATCH_NOTHROW
void
-#if USE_LIBDISPATCH_INIT_CONSTRUCTOR
-libdispatch_init(void) __attribute__ ((constructor));
-#else
libdispatch_init(void);
-#endif
-#if HAVE_MACH
+#if TARGET_OS_MAC
#define DISPATCH_COCOA_COMPAT 1
#if DISPATCH_COCOA_COMPAT
__OSX_AVAILABLE_STARTING(__MAC_10_6,__IPHONE_4_0)
-DISPATCH_EXPORT DISPATCH_NOTHROW
+DISPATCH_EXPORT DISPATCH_CONST DISPATCH_WARN_RESULT DISPATCH_NOTHROW
mach_port_t
_dispatch_get_main_queue_port_4CF(void);
@@ -108,6 +100,10 @@
DISPATCH_EXPORT
void (*dispatch_end_thread_4GC)(void);
+__OSX_AVAILABLE_STARTING(__MAC_10_7,__IPHONE_4_3)
+DISPATCH_EXPORT
+void (*dispatch_no_worker_threads_4GC)(void);
+
__OSX_AVAILABLE_STARTING(__MAC_10_6,__IPHONE_4_0)
DISPATCH_EXPORT
void *(*_dispatch_begin_NSAutoReleasePool)(void);
@@ -116,25 +112,35 @@
DISPATCH_EXPORT
void (*_dispatch_end_NSAutoReleasePool)(void *);
+#define _dispatch_time_after_nsec(t) \
+ dispatch_time(DISPATCH_TIME_NOW, (t))
+#define _dispatch_time_after_usec(t) \
+ dispatch_time(DISPATCH_TIME_NOW, (t) * NSEC_PER_USEC)
+#define _dispatch_time_after_msec(t) \
+ dispatch_time(DISPATCH_TIME_NOW, (t) * NSEC_PER_MSEC)
+#define _dispatch_time_after_sec(t) \
+ dispatch_time(DISPATCH_TIME_NOW, (t) * NSEC_PER_SEC)
+
#endif
-#endif /* HAVE_MACH */
+#endif /* TARGET_OS_MAC */
/* pthreads magic */
-DISPATCH_NOTHROW void dispatch_atfork_prepare(void);
-DISPATCH_NOTHROW void dispatch_atfork_parent(void);
-DISPATCH_NOTHROW void dispatch_atfork_child(void);
-DISPATCH_NOTHROW void dispatch_init_pthread(pthread_t);
+DISPATCH_EXPORT DISPATCH_NOTHROW void dispatch_atfork_prepare(void);
+DISPATCH_EXPORT DISPATCH_NOTHROW void dispatch_atfork_parent(void);
+DISPATCH_EXPORT DISPATCH_NOTHROW void dispatch_atfork_child(void);
-#if HAVE_MACH
+#if TARGET_OS_MAC
/*
* Extract the context pointer from a mach message trailer.
*/
__OSX_AVAILABLE_STARTING(__MAC_10_6,__IPHONE_4_0)
+DISPATCH_EXPORT DISPATCH_PURE DISPATCH_WARN_RESULT DISPATCH_NONNULL_ALL
+DISPATCH_NOTHROW
void *
dispatch_mach_msg_get_context(mach_msg_header_t *msg);
-#endif
+#endif /* TARGET_OS_MAC */
-__DISPATCH_END_DECLS
+__END_DECLS
#endif
diff --git a/private/queue_private.h b/private/queue_private.h
index c157a04..5ec36d0 100644
--- a/private/queue_private.h
+++ b/private/queue_private.h
@@ -1,20 +1,20 @@
/*
- * Copyright (c) 2008-2009 Apple Inc. All rights reserved.
+ * Copyright (c) 2008-2010 Apple Inc. All rights reserved.
*
* @APPLE_APACHE_LICENSE_HEADER_START@
- *
+ *
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
- *
+ *
* http://www.apache.org/licenses/LICENSE-2.0
- *
+ *
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
- *
+ *
* @APPLE_APACHE_LICENSE_HEADER_END@
*/
@@ -32,7 +32,7 @@
#include <dispatch/base.h> // for HeaderDoc
#endif
-__DISPATCH_BEGIN_DECLS
+__BEGIN_DECLS
/*!
@@ -46,42 +46,27 @@
DISPATCH_QUEUE_OVERCOMMIT = 0x2ull,
};
-#define DISPATCH_QUEUE_FLAGS_MASK (DISPATCH_QUEUE_OVERCOMMIT)
-
-#ifdef __BLOCKS__
-__OSX_AVAILABLE_STARTING(__MAC_10_6,__IPHONE_4_0)
-DISPATCH_EXPORT DISPATCH_NONNULL_ALL DISPATCH_NOTHROW
-void
-dispatch_barrier_sync(dispatch_queue_t queue, dispatch_block_t block);
-#endif
-
-__OSX_AVAILABLE_STARTING(__MAC_10_6,__IPHONE_4_0)
-DISPATCH_EXPORT DISPATCH_NONNULL1 DISPATCH_NONNULL3 DISPATCH_NOTHROW
-void
-dispatch_barrier_sync_f(dispatch_queue_t dq, void *context, dispatch_function_t work);
-
-#ifdef __BLOCKS__
-__OSX_AVAILABLE_STARTING(__MAC_10_6,__IPHONE_4_0)
-DISPATCH_EXPORT DISPATCH_NONNULL_ALL DISPATCH_NOTHROW
-void
-dispatch_barrier_async(dispatch_queue_t queue, dispatch_block_t block);
-#endif
-
-__OSX_AVAILABLE_STARTING(__MAC_10_6,__IPHONE_4_0)
-DISPATCH_EXPORT DISPATCH_NONNULL1 DISPATCH_NONNULL3 DISPATCH_NOTHROW
-void
-dispatch_barrier_async_f(dispatch_queue_t dq, void *context, dispatch_function_t work);
+#define DISPATCH_QUEUE_FLAGS_MASK (DISPATCH_QUEUE_OVERCOMMIT)
/*!
* @function dispatch_queue_set_width
*
* @abstract
- * Set the width of concurrency for a given queue. The default width of a
- * privately allocated queue is one.
+ * Set the width of concurrency for a given queue. The width of a serial queue
+ * is one.
+ *
+ * @discussion
+ * This SPI is DEPRECATED and will be removed in a future release.
+ * Uses of this SPI to make a queue concurrent by setting its width to LONG_MAX
+ * should be replaced by passing DISPATCH_QUEUE_CONCURRENT to
+ * dispatch_queue_create().
+ * Uses of this SPI to limit queue concurrency are not recommended and should
+ * be replaced by alternative mechanisms such as a dispatch semaphore created
+ * with the desired concurrency width.
*
* @param queue
- * The queue to adjust. Passing the main queue, a default concurrent queue or
- * any other default queue will be ignored.
+ * The queue to adjust. Passing the main queue or a global concurrent queue
+ * will be ignored.
*
* @param width
* The new maximum width of concurrency depending on available resources.
@@ -89,19 +74,43 @@
* Negative values are magic values that map to automatic width values.
* Unknown negative values default to DISPATCH_QUEUE_WIDTH_MAX_LOGICAL_CPUS.
*/
-#define DISPATCH_QUEUE_WIDTH_ACTIVE_CPUS -1
+#define DISPATCH_QUEUE_WIDTH_ACTIVE_CPUS -1
#define DISPATCH_QUEUE_WIDTH_MAX_PHYSICAL_CPUS -2
#define DISPATCH_QUEUE_WIDTH_MAX_LOGICAL_CPUS -3
__OSX_AVAILABLE_STARTING(__MAC_10_6,__IPHONE_4_0)
DISPATCH_EXPORT DISPATCH_NONNULL_ALL DISPATCH_NOTHROW
void
-dispatch_queue_set_width(dispatch_queue_t dq, long width);
+dispatch_queue_set_width(dispatch_queue_t dq, long width); // DEPRECATED
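+
+/*
+ * For illustration only (not part of this interface): a sketch of limiting
+ * concurrency with a dispatch semaphore instead of dispatch_queue_set_width().
+ * The names 'work_sema' and 'do_work' are hypothetical.
+ *
+ *	dispatch_semaphore_t work_sema = dispatch_semaphore_create(4);
+ *
+ *	dispatch_semaphore_wait(work_sema, DISPATCH_TIME_FOREVER);
+ *	dispatch_async(dispatch_get_global_queue(
+ *			DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
+ *		do_work();
+ *		dispatch_semaphore_signal(work_sema);
+ *	});
+ */
+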
+/*!
+ * @function dispatch_set_current_target_queue
+ *
+ * @abstract
+ * Synchronously sets the target queue of the current serial queue.
+ *
+ * @discussion
+ * This SPI is provided for the limited case where calling
+ * dispatch_set_target_queue() is not sufficient. It works similarly to
+ * dispatch_set_target_queue() except the target queue of the current queue
+ * is immediately changed so that pending blocks on the queue will run on the
+ * new target queue. Calling this from outside of a block executing on a serial
+ * queue is undefined.
+ *
+ * @param queue
+ * The new target queue for the object. The queue is retained, and the
+ * previous target queue, if any, is released.
+ * If queue is DISPATCH_TARGET_QUEUE_DEFAULT, set the object's target queue
+ * to the default target queue for the given object type.
+ */
+__OSX_AVAILABLE_STARTING(__MAC_10_7,__IPHONE_5_0)
+DISPATCH_EXPORT DISPATCH_NOTHROW
+void
+dispatch_set_current_target_queue(dispatch_queue_t queue);
__OSX_AVAILABLE_STARTING(__MAC_10_6,__IPHONE_4_0)
-extern const struct dispatch_queue_offsets_s {
+DISPATCH_EXPORT const struct dispatch_queue_offsets_s {
// always add new fields at the end
const uint16_t dqo_version;
const uint16_t dqo_label;
@@ -117,6 +126,6 @@
} dispatch_queue_offsets;
-__DISPATCH_END_DECLS
+__END_DECLS
#endif
diff --git a/private/source_private.h b/private/source_private.h
index 4b0578d..576f64a 100644
--- a/private/source_private.h
+++ b/private/source_private.h
@@ -1,20 +1,20 @@
/*
- * Copyright (c) 2008-2009 Apple Inc. All rights reserved.
+ * Copyright (c) 2008-2011 Apple Inc. All rights reserved.
*
* @APPLE_APACHE_LICENSE_HEADER_START@
- *
+ *
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
- *
+ *
* http://www.apache.org/licenses/LICENSE-2.0
- *
+ *
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
- *
+ *
* @APPLE_APACHE_LICENSE_HEADER_END@
*/
@@ -40,7 +40,16 @@
*/
#define DISPATCH_SOURCE_TYPE_VFS (&_dispatch_source_type_vfs)
__OSX_AVAILABLE_STARTING(__MAC_10_6,__IPHONE_4_0)
-extern const struct dispatch_source_type_s _dispatch_source_type_vfs;
+DISPATCH_EXPORT const struct dispatch_source_type_s _dispatch_source_type_vfs;
+
+/*!
+ * @const DISPATCH_SOURCE_TYPE_VM
+ * @discussion A dispatch source that monitors virtual memory.
+ * The mask is a mask of desired events from dispatch_source_vm_flags_t.
+ */
+#define DISPATCH_SOURCE_TYPE_VM (&_dispatch_source_type_vm)
+__OSX_AVAILABLE_STARTING(__MAC_10_7,__IPHONE_4_3)
+DISPATCH_EXPORT const struct dispatch_source_type_s _dispatch_source_type_vm;
/*!
* @enum dispatch_source_vfs_flags_t
@@ -91,11 +100,17 @@
/*!
* @enum dispatch_source_mach_send_flags_t
*
- * @constant DISPATCH_MACH_SEND_DELETED
- * The receive right corresponding to the given send right was destroyed.
+ * @constant DISPATCH_MACH_SEND_POSSIBLE
+ * The mach port corresponding to the given send right has space available
+ * for messages. Delivered only once a mach_msg() to that send right with
+ * options MACH_SEND_MSG|MACH_SEND_TIMEOUT|MACH_SEND_NOTIFY has returned
+ * MACH_SEND_TIMED_OUT (and not again until the next such mach_msg() timeout).
+ * NOTE: The source must have registered the send right for monitoring with the
+ * system for such a mach_msg() to arm the send-possible notification, so
+ * the initial send attempt must occur from a source registration handler.
*/
enum {
- DISPATCH_MACH_SEND_DELETED = 0x2,
+ DISPATCH_MACH_SEND_POSSIBLE = 0x8,
};
/*!
@@ -109,23 +124,40 @@
DISPATCH_PROC_REAP = 0x10000000,
};
-__DISPATCH_BEGIN_DECLS
+/*!
+ * @enum dispatch_source_vm_flags_t
+ *
+ * @constant DISPATCH_VM_PRESSURE
+ * The VM has experienced memory pressure.
+ */
-#if HAVE_MACH
+enum {
+ DISPATCH_VM_PRESSURE = 0x80000000,
+};
+
+#if TARGET_IPHONE_SIMULATOR // rdar://problem/9219483
+#define DISPATCH_VM_PRESSURE DISPATCH_VNODE_ATTRIB
+#endif
+
+__BEGIN_DECLS
+
+#if TARGET_OS_MAC
/*!
* @typedef dispatch_mig_callback_t
*
* @abstract
* The signature of a function that handles Mach message delivery and response.
*/
-typedef boolean_t (*dispatch_mig_callback_t)(mach_msg_header_t *message, mach_msg_header_t *reply);
+typedef boolean_t (*dispatch_mig_callback_t)(mach_msg_header_t *message,
+ mach_msg_header_t *reply);
__OSX_AVAILABLE_STARTING(__MAC_10_6,__IPHONE_4_0)
-DISPATCH_NONNULL_ALL DISPATCH_NOTHROW
+DISPATCH_EXPORT DISPATCH_NONNULL_ALL DISPATCH_NOTHROW
mach_msg_return_t
-dispatch_mig_server(dispatch_source_t ds, size_t maxmsgsz, dispatch_mig_callback_t callback);
+dispatch_mig_server(dispatch_source_t ds, size_t maxmsgsz,
+ dispatch_mig_callback_t callback);
#endif
-__DISPATCH_END_DECLS
+__END_DECLS
#endif
diff --git a/resolver/resolved.h b/resolver/resolved.h
new file mode 100644
index 0000000..bb9a82d
--- /dev/null
+++ b/resolver/resolved.h
@@ -0,0 +1,26 @@
+/*
+ * Copyright (c) 2010-2011 Apple Inc. All rights reserved.
+ *
+ * @APPLE_APACHE_LICENSE_HEADER_START@
+ *
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ *
+ * @APPLE_APACHE_LICENSE_HEADER_END@
+ */
+
+/*
+ * IMPORTANT: This header file describes INTERNAL interfaces to libdispatch
+ * which are subject to change in future releases of Mac OS X. Any applications
+ * relying on these interfaces WILL break.
+ */
+
diff --git a/resolver/resolver.c b/resolver/resolver.c
new file mode 100644
index 0000000..8b390b4
--- /dev/null
+++ b/resolver/resolver.c
@@ -0,0 +1,20 @@
+/*
+ * Copyright (c) 2010 Apple Inc. All rights reserved.
+ *
+ * @APPLE_APACHE_LICENSE_HEADER_START@
+ *
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ *
+ * @APPLE_APACHE_LICENSE_HEADER_END@
+ */
+
diff --git a/resolver/resolver.h b/resolver/resolver.h
new file mode 100644
index 0000000..5b1cd04
--- /dev/null
+++ b/resolver/resolver.h
@@ -0,0 +1,31 @@
+/*
+ * Copyright (c) 2010 Apple Inc. All rights reserved.
+ *
+ * @APPLE_APACHE_LICENSE_HEADER_START@
+ *
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ *
+ * @APPLE_APACHE_LICENSE_HEADER_END@
+ */
+
+/*
+ * IMPORTANT: This header file describes INTERNAL interfaces to libdispatch
+ * which are subject to change in future releases of Mac OS X. Any applications
+ * relying on these interfaces WILL break.
+ */
+
+#ifndef __DISPATCH_RESOLVERS__
+#define __DISPATCH_RESOLVERS__
+
+
+#endif
diff --git a/src/Makefile.am b/src/Makefile.am
index 7c5dc35..20b2baa 100644
--- a/src/Makefile.am
+++ b/src/Makefile.am
@@ -3,55 +3,71 @@
#
lib_LTLIBRARIES=libdispatch.la
-noinst_LTLIBRARIES=libshims.la
-libdispatch_la_SOURCES= \
- apply.c \
- benchmark.c \
- object.c \
- once.c \
- queue.c \
- queue_kevent.c \
- semaphore.c \
- source.c \
- source_kevent.c \
- time.c
+libdispatch_la_SOURCES= \
+ apply.c \
+ benchmark.c \
+ data.c \
+ init.c \
+ io.c \
+ object.c \
+ once.c \
+ queue.c \
+ semaphore.c \
+ source.c \
+ time.c \
+ protocol.defs \
+ provider.d \
+ data_internal.h \
+ internal.h \
+ io_internal.h \
+ object_internal.h \
+ queue_internal.h \
+ semaphore_internal.h \
+ shims.h \
+ source_internal.h \
+ trace.h \
+ shims/atomic.h \
+ shims/getprogname.h \
+ shims/hw_config.h \
+ shims/malloc_zone.h \
+ shims/perfmon.h \
+ shims/time.h \
+ shims/tsd.h
-libshims_la_SOURCES= \
- shims/mach.c \
- shims/time.c \
- shims/tsd.c
+INCLUDES=-I$(top_builddir) -I$(top_srcdir) -I$(top_srcdir)/private \
+ @APPLE_LIBC_SOURCE_PATH@ @APPLE_LIBCLOSURE_SOURCE_PATH@ @APPLE_XNU_SOURCE_PATH@
-libdispatch_la_CFLAGS=-Wall
-INCLUDES=-I$(top_builddir) -I$(top_srcdir) \
- @APPLE_LIBC_SOURCE_PATH@ @APPLE_XNU_SOURCE_PATH@
-
+libdispatch_la_CFLAGS=-Wall $(VISIBILITY_FLAGS) $(OMIT_LEAF_FP_FLAGS)
libdispatch_la_CFLAGS+=$(MARCH_FLAGS) $(CBLOCKS_FLAGS) $(KQUEUE_CFLAGS)
-if USE_LEGACY_API
-libdispatch_la_SOURCES+= \
- legacy.c
+libdispatch_la_LDFLAGS=-avoid-version
+
+if HAVE_DARWIN_LD
+libdispatch_la_LDFLAGS+=-Wl,-compatibility_version,1 -Wl,-current_version,$(VERSION)
endif
-libdispatch_la_LIBADD=libshims.la $(KQUEUE_LIBS)
-libdispatch_la_DEPENDENCIES=libshims.la
-
-if USE_LIBPTHREAD_WORKQUEUE
-libdispatch_la_LIBADD+=-lpthread_workqueue
-endif
+CLEANFILES=
if USE_MIG
-libdispatch_la_SOURCES+= \
- protocolUser.c \
- protocolServer.c
-BUILT_SOURCES= \
- protocol.h \
- protocolUser.c \
- protocolServer.c \
+BUILT_SOURCES= \
+ protocolUser.c \
+ protocol.h \
+ protocolServer.c \
protocolServer.h
-CLEANFILES=$BUILT_SOURCES
-protocol.h protocolUser.c protocolServer.h protocolServer.c: protocol.defs
- $(MIG) -user protocolUser.c -header protocol.h \
- -server protocolServer.c -sheader protocolServer.h protocol.defs
+nodist_libdispatch_la_SOURCES=$(BUILT_SOURCES)
+CLEANFILES+=$(BUILT_SOURCES)
+
+%User.c %.h %Server.c %Server.h: $(abs_srcdir)/%.defs
+ $(MIG) -user $*User.c -header $*.h \
+ -server $*Server.c -sheader $*Server.h $<
+endif
+
+if USE_XNU_SOURCE
+# hack for pthread_machdep.h's #include <System/machine/cpu_capabilities.h>
+$(libdispatch_la_OBJECTS): $(abs_srcdir)/System
+$(abs_srcdir)/System:
+ $(LN_S) -fh "@APPLE_XNU_SOURCE_SYSTEM_PATH@" System
+CLEANFILES+=System
endif
diff --git a/src/apply.c b/src/apply.c
index 2c51eb2..9a63439 100644
--- a/src/apply.c
+++ b/src/apply.c
@@ -1,20 +1,20 @@
/*
- * Copyright (c) 2008-2009 Apple Inc. All rights reserved.
+ * Copyright (c) 2008-2011 Apple Inc. All rights reserved.
*
* @APPLE_APACHE_LICENSE_HEADER_START@
- *
+ *
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
- *
+ *
* http://www.apache.org/licenses/LICENSE-2.0
- *
+ *
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
- *
+ *
* @APPLE_APACHE_LICENSE_HEADER_END@
*/
#include "internal.h"
@@ -24,81 +24,155 @@
// local thread to be sufficiently away to avoid cache-line contention with the
// busy 'da_index' variable.
//
-// NOTE: 'char' arrays cause GCC to insert buffer overflow detection logic
+// NOTE: 'char' arrays cause GCC to insert buffer overflow detection logic
struct dispatch_apply_s {
- long _da_pad0[DISPATCH_CACHELINE_SIZE / sizeof(long)];
- void (*da_func)(void *, size_t);
- void *da_ctxt;
- size_t da_iterations;
- size_t da_index;
- uint32_t da_thr_cnt;
- dispatch_semaphore_t da_sema;
- long _da_pad1[DISPATCH_CACHELINE_SIZE / sizeof(long)];
+ long _da_pad0[DISPATCH_CACHELINE_SIZE / sizeof(long)];
+ void (*da_func)(void *, size_t);
+ void *da_ctxt;
+ size_t da_iterations;
+ size_t da_index;
+ uint32_t da_thr_cnt;
+ _dispatch_thread_semaphore_t da_sema;
+ dispatch_queue_t da_queue;
+ long _da_pad1[DISPATCH_CACHELINE_SIZE / sizeof(long)];
};
-static void
-_dispatch_apply2(void *_ctxt)
+DISPATCH_ALWAYS_INLINE
+static inline void
+_dispatch_apply_invoke(void *ctxt)
{
- struct dispatch_apply_s *da = _ctxt;
+ struct dispatch_apply_s *da = ctxt;
size_t const iter = da->da_iterations;
typeof(da->da_func) const func = da->da_func;
- void *const ctxt = da->da_ctxt;
+ void *const da_ctxt = da->da_ctxt;
size_t idx;
_dispatch_workitem_dec(); // this unit executes many items
+ // Make nested dispatch_apply fall into serial case rdar://problem/9294578
+ _dispatch_thread_setspecific(dispatch_apply_key, (void*)~0ul);
// Striding is the responsibility of the caller.
- while (fastpath((idx = dispatch_atomic_inc(&da->da_index) - 1) < iter)) {
- func(ctxt, idx);
+ while (fastpath((idx = dispatch_atomic_inc2o(da, da_index) - 1) < iter)) {
+ _dispatch_client_callout2(da_ctxt, idx, func);
_dispatch_workitem_inc();
}
+ _dispatch_thread_setspecific(dispatch_apply_key, NULL);
- if (dispatch_atomic_dec(&da->da_thr_cnt) == 0) {
- dispatch_semaphore_signal(da->da_sema);
+ dispatch_atomic_release_barrier();
+ if (dispatch_atomic_dec2o(da, da_thr_cnt) == 0) {
+ _dispatch_thread_semaphore_signal(da->da_sema);
}
}
+DISPATCH_NOINLINE
static void
-_dispatch_apply_serial(void *context)
+_dispatch_apply2(void *ctxt)
{
- struct dispatch_apply_s *da = context;
+ _dispatch_apply_invoke(ctxt);
+}
+
+static void
+_dispatch_apply3(void *ctxt)
+{
+ struct dispatch_apply_s *da = ctxt;
+ dispatch_queue_t old_dq = _dispatch_thread_getspecific(dispatch_queue_key);
+
+ _dispatch_thread_setspecific(dispatch_queue_key, da->da_queue);
+ _dispatch_apply_invoke(ctxt);
+ _dispatch_thread_setspecific(dispatch_queue_key, old_dq);
+}
+
+static void
+_dispatch_apply_serial(void *ctxt)
+{
+ struct dispatch_apply_s *da = ctxt;
size_t idx = 0;
_dispatch_workitem_dec(); // this unit executes many items
do {
- da->da_func(da->da_ctxt, idx);
+ _dispatch_client_callout2(da->da_ctxt, idx, da->da_func);
_dispatch_workitem_inc();
} while (++idx < da->da_iterations);
}
-#ifdef __BLOCKS__
-void
-dispatch_apply(size_t iterations, dispatch_queue_t dq, void (^work)(size_t))
-{
- struct Block_basic *bb = (void *)work;
-
- dispatch_apply_f(iterations, dq, bb, (void *)bb->Block_invoke);
-}
-#endif
-
// 256 threads should be good enough for the short to mid term
-#define DISPATCH_APPLY_MAX_CPUS 256
+#define DISPATCH_APPLY_MAX_CPUS 256
-DISPATCH_NOINLINE
-void
-dispatch_apply_f(size_t iterations, dispatch_queue_t dq, void *ctxt, void (*func)(void *, size_t))
+DISPATCH_ALWAYS_INLINE
+static inline void
+_dispatch_apply_f2(dispatch_queue_t dq, struct dispatch_apply_s *da,
+ dispatch_function_t func)
{
struct dispatch_apply_dc_s {
DISPATCH_CONTINUATION_HEADER(dispatch_apply_dc_s);
} da_dc[DISPATCH_APPLY_MAX_CPUS];
- struct dispatch_apply_s da;
size_t i;
+ for (i = 0; i < da->da_thr_cnt - 1; i++) {
+ da_dc[i].do_vtable = NULL;
+ da_dc[i].do_next = &da_dc[i + 1];
+ da_dc[i].dc_func = func;
+ da_dc[i].dc_ctxt = da;
+ }
+
+ da->da_sema = _dispatch_get_thread_semaphore();
+
+ _dispatch_queue_push_list(dq, (void *)&da_dc[0],
+ (void *)&da_dc[da->da_thr_cnt - 2]);
+ // Call the first element directly
+ _dispatch_apply2(da);
+ _dispatch_workitem_inc();
+
+ _dispatch_thread_semaphore_wait(da->da_sema);
+ _dispatch_put_thread_semaphore(da->da_sema);
+}
+
+static void
+_dispatch_apply_redirect(void *ctxt)
+{
+ struct dispatch_apply_s *da = ctxt;
+ uint32_t da_width = 2 * (da->da_thr_cnt - 1);
+ dispatch_queue_t dq = da->da_queue, rq = dq, tq;
+
+ do {
+ uint32_t running = dispatch_atomic_add2o(rq, dq_running, da_width);
+ uint32_t width = rq->dq_width;
+ if (slowpath(running > width)) {
+ uint32_t excess = width > 1 ? running - width : da_width;
+ for (tq = dq; 1; tq = tq->do_targetq) {
+ (void)dispatch_atomic_sub2o(tq, dq_running, excess);
+ if (tq == rq) {
+ break;
+ }
+ }
+ da_width -= excess;
+ if (slowpath(!da_width)) {
+ return _dispatch_apply_serial(da);
+ }
+ da->da_thr_cnt -= excess / 2;
+ }
+ rq = rq->do_targetq;
+ } while (slowpath(rq->do_targetq));
+ _dispatch_apply_f2(rq, da, _dispatch_apply3);
+ do {
+ (void)dispatch_atomic_sub2o(dq, dq_running, da_width);
+ dq = dq->do_targetq;
+ } while (slowpath(dq->do_targetq));
+}
+
+DISPATCH_NOINLINE
+void
+dispatch_apply_f(size_t iterations, dispatch_queue_t dq, void *ctxt,
+ void (*func)(void *, size_t))
+{
+ struct dispatch_apply_s da;
+
da.da_func = func;
da.da_ctxt = ctxt;
da.da_iterations = iterations;
da.da_index = 0;
da.da_thr_cnt = _dispatch_hw_config.cc_max_active;
+ da.da_queue = NULL;
if (da.da_thr_cnt > DISPATCH_APPLY_MAX_CPUS) {
da.da_thr_cnt = DISPATCH_APPLY_MAX_CPUS;
@@ -109,46 +183,62 @@
if (iterations < da.da_thr_cnt) {
da.da_thr_cnt = (uint32_t)iterations;
}
- if (slowpath(dq->dq_width <= 2 || da.da_thr_cnt <= 1)) {
+ if (slowpath(dq->dq_width <= 2) || slowpath(da.da_thr_cnt <= 1) ||
+ slowpath(_dispatch_thread_getspecific(dispatch_apply_key))) {
return dispatch_sync_f(dq, &da, _dispatch_apply_serial);
}
-
- for (i = 0; i < da.da_thr_cnt; i++) {
- da_dc[i].do_vtable = NULL;
- da_dc[i].do_next = &da_dc[i + 1];
- da_dc[i].dc_func = _dispatch_apply2;
- da_dc[i].dc_ctxt = &da;
- }
-
- da.da_sema = _dispatch_get_thread_semaphore();
-
- // some queues are easy to borrow and some are not
+ dispatch_queue_t old_dq = _dispatch_thread_getspecific(dispatch_queue_key);
if (slowpath(dq->do_targetq)) {
- _dispatch_queue_push_list(dq, (void *)&da_dc[0], (void *)&da_dc[da.da_thr_cnt - 1]);
- } else {
- dispatch_queue_t old_dq = _dispatch_thread_getspecific(dispatch_queue_key);
- // root queues are always concurrent and safe to borrow
- _dispatch_queue_push_list(dq, (void *)&da_dc[1], (void *)&da_dc[da.da_thr_cnt - 1]);
- _dispatch_thread_setspecific(dispatch_queue_key, dq);
- // The first da_dc[] element was explicitly not pushed on to the queue.
- // We need to either call it like so:
- // da_dc[0].dc_func(da_dc[0].dc_ctxt);
- // Or, given that we know the 'func' and 'ctxt', we can call it directly:
- _dispatch_apply2(&da);
- _dispatch_workitem_inc();
- _dispatch_thread_setspecific(dispatch_queue_key, old_dq);
+ if (slowpath(dq == old_dq)) {
+ return dispatch_sync_f(dq, &da, _dispatch_apply_serial);
+ } else {
+ da.da_queue = dq;
+ return dispatch_sync_f(dq, &da, _dispatch_apply_redirect);
+ }
}
- dispatch_semaphore_wait(da.da_sema, DISPATCH_TIME_FOREVER);
- _dispatch_put_thread_semaphore(da.da_sema);
+ dispatch_atomic_acquire_barrier();
+ _dispatch_thread_setspecific(dispatch_queue_key, dq);
+ _dispatch_apply_f2(dq, &da, _dispatch_apply2);
+ _dispatch_thread_setspecific(dispatch_queue_key, old_dq);
}
+#ifdef __BLOCKS__
+#if DISPATCH_COCOA_COMPAT
+DISPATCH_NOINLINE
+static void
+_dispatch_apply_slow(size_t iterations, dispatch_queue_t dq,
+ void (^work)(size_t))
+{
+ struct Block_basic *bb = (void *)_dispatch_Block_copy((void *)work);
+ dispatch_apply_f(iterations, dq, bb, (void *)bb->Block_invoke);
+ Block_release(bb);
+}
+#endif
+
+void
+dispatch_apply(size_t iterations, dispatch_queue_t dq, void (^work)(size_t))
+{
+#if DISPATCH_COCOA_COMPAT
+ // Under GC, blocks transferred to other threads must be Block_copy()ed
+ // rdar://problem/7455071
+ if (dispatch_begin_thread_4GC) {
+ return _dispatch_apply_slow(iterations, dq, work);
+ }
+#endif
+ struct Block_basic *bb = (void *)work;
+ dispatch_apply_f(iterations, dq, bb, (void *)bb->Block_invoke);
+}
+#endif
+
#if 0
#ifdef __BLOCKS__
void
-dispatch_stride(size_t offset, size_t stride, size_t iterations, dispatch_queue_t dq, void (^work)(size_t))
+dispatch_stride(size_t offset, size_t stride, size_t iterations,
+ dispatch_queue_t dq, void (^work)(size_t))
{
struct Block_basic *bb = (void *)work;
- dispatch_stride_f(offset, stride, iterations, dq, bb, (void *)bb->Block_invoke);
+ dispatch_stride_f(offset, stride, iterations, dq, bb,
+ (void *)bb->Block_invoke);
}
#endif
diff --git a/src/benchmark.c b/src/benchmark.c
index e1f40dd..246affa 100644
--- a/src/benchmark.c
+++ b/src/benchmark.c
@@ -2,19 +2,19 @@
* Copyright (c) 2008-2009 Apple Inc. All rights reserved.
*
* @APPLE_APACHE_LICENSE_HEADER_START@
- *
+ *
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
- *
+ *
* http://www.apache.org/licenses/LICENSE-2.0
- *
+ *
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
- *
+ *
* @APPLE_APACHE_LICENSE_HEADER_END@
*/
@@ -34,7 +34,7 @@
static void
_dispatch_benchmark_init(void *context)
{
- struct __dispatch_benchmark_data_s *bdata = (struct __dispatch_benchmark_data_s *)context;
+ struct __dispatch_benchmark_data_s *bdata = context;
// try and simulate performance of real benchmark as much as possible
// keep 'f', 'c' and 'cnt' in registers
register void (*f)(void *) = bdata->func;
@@ -42,7 +42,7 @@
register size_t cnt = bdata->count;
size_t i = 0;
uint64_t start, delta;
-#ifdef __LP64__
+#if defined(__LP64__)
__uint128_t lcost;
#else
long double lcost;
@@ -75,21 +75,22 @@
uint64_t
dispatch_benchmark(size_t count, void (^block)(void))
{
- struct Block_basic *bb = (struct Block_basic *)(void *)block;
- return dispatch_benchmark_f(count, block, (dispatch_function_t)bb->Block_invoke);
+ struct Block_basic *bb = (void *)block;
+ return dispatch_benchmark_f(count, block, (void *)bb->Block_invoke);
}
#endif
uint64_t
-dispatch_benchmark_f(size_t count, register void *ctxt, register void (*func)(void *))
+dispatch_benchmark_f(size_t count, register void *ctxt,
+ register void (*func)(void *))
{
static struct __dispatch_benchmark_data_s bdata = {
- .func = (dispatch_function_t)dummy_function,
+ .func = (void *)dummy_function,
.count = 10000000ul, // ten million
};
static dispatch_once_t pred;
uint64_t ns, start, delta;
-#ifdef __LP64__
+#if defined(__LP64__)
__uint128_t conversion, big_denom;
#else
long double conversion, big_denom;
diff --git a/src/data.c b/src/data.c
new file mode 100644
index 0000000..e125656
--- /dev/null
+++ b/src/data.c
@@ -0,0 +1,429 @@
+/*
+ * Copyright (c) 2009-2011 Apple Inc. All rights reserved.
+ *
+ * @APPLE_APACHE_LICENSE_HEADER_START@
+ *
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ *
+ * @APPLE_APACHE_LICENSE_HEADER_END@
+ */
+
+#include "internal.h"
+
+// Dispatch data objects are dispatch objects with standard retain/release
+// memory management. A dispatch data object either points to a number of other
+// dispatch data objects or is a leaf data object. A leaf data object contains
+// a pointer to the represented memory. A composite data object specifies the
+// total size of the data it represents and a list of constituent records.
+//
+// A leaf data object has a single entry in records[], the object size is the
+// same as records[0].length and records[0].from is always 0. In other words, a
+// leaf data object always points to a full represented buffer, so a composite
+// dispatch data object is needed to represent a subrange of a memory region.
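+//
+// An illustrative client-side usage sketch (not part of this file): a leaf
+// object is created from a caller-provided buffer, and a composite object
+// referencing a subrange of the leaf is derived from it. The variable names
+// are hypothetical.
+//
+//	dispatch_data_t leaf = dispatch_data_create(buf, len, NULL,
+//			DISPATCH_DATA_DESTRUCTOR_DEFAULT); // buffer is copied
+//	dispatch_data_t sub = dispatch_data_create_subrange(leaf, 16, 64);
+//	dispatch_release(sub);
+//	dispatch_release(leaf);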
+
+#define _dispatch_data_retain(x) dispatch_retain(x)
+#define _dispatch_data_release(x) dispatch_release(x)
+
+static void _dispatch_data_dispose(dispatch_data_t data);
+static size_t _dispatch_data_debug(dispatch_data_t data, char* buf,
+ size_t bufsiz);
+
+#if DISPATCH_DATA_MOVABLE
+static const dispatch_block_t _dispatch_data_destructor_unlock = ^{
+ DISPATCH_CRASH("unlock destructor called");
+};
+#define DISPATCH_DATA_DESTRUCTOR_UNLOCK (_dispatch_data_destructor_unlock)
+#endif
+
+const struct dispatch_data_vtable_s _dispatch_data_vtable = {
+ .do_type = DISPATCH_DATA_TYPE,
+ .do_kind = "data",
+ .do_dispose = _dispatch_data_dispose,
+ .do_invoke = NULL,
+ .do_probe = (void *)dummy_function_r0,
+ .do_debug = _dispatch_data_debug,
+};
+
+static dispatch_data_t
+_dispatch_data_init(size_t n)
+{
+ dispatch_data_t data = calloc(1ul, sizeof(struct dispatch_data_s) +
+ n * sizeof(range_record));
+ data->num_records = n;
+ data->do_vtable = &_dispatch_data_vtable;
+ data->do_xref_cnt = 1;
+ data->do_ref_cnt = 1;
+ data->do_targetq = dispatch_get_global_queue(
+ DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);
+ data->do_next = DISPATCH_OBJECT_LISTLESS;
+ return data;
+}
+
+dispatch_data_t
+dispatch_data_create(const void* buffer, size_t size, dispatch_queue_t queue,
+ dispatch_block_t destructor)
+{
+ dispatch_data_t data;
+ if (!buffer || !size) {
+ // Empty data requested so return the singleton empty object. Call
+ // destructor immediately in this case to ensure any unused associated
+ // storage is released.
+ if (destructor == DISPATCH_DATA_DESTRUCTOR_FREE) {
+ free((void*)buffer);
+ } else if (destructor != DISPATCH_DATA_DESTRUCTOR_DEFAULT) {
+ dispatch_async(queue ? queue : dispatch_get_global_queue(
+ DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), destructor);
+ }
+ return dispatch_data_empty;
+ }
+ data = _dispatch_data_init(1);
+ // Leaf objects always point to the entirety of the memory region
+ data->leaf = true;
+ data->size = size;
+ data->records[0].from = 0;
+ data->records[0].length = size;
+ data->destructor = DISPATCH_DATA_DESTRUCTOR_FREE;
+ if (destructor == DISPATCH_DATA_DESTRUCTOR_DEFAULT) {
+ // The default destructor was provided, indicating the data should be
+ // copied.
+ void *data_buf = malloc(size);
+ if (slowpath(!data_buf)) {
+ free(data);
+ return NULL;
+ }
+ buffer = memcpy(data_buf, buffer, size);
+ } else {
+ if (destructor != DISPATCH_DATA_DESTRUCTOR_FREE) {
+ data->destructor = Block_copy(destructor);
+ }
+#if DISPATCH_DATA_MOVABLE
+ // A non-default destructor was provided, indicating the system does not
+ // own the buffer. Mark the object as locked since the application has
+ // direct access to the buffer and it cannot be reallocated/moved.
+ data->locked = 1;
+#endif
+ }
+ data->records[0].data_object = (void*)buffer;
+ if (queue) {
+ _dispatch_retain(queue);
+ data->do_targetq = queue;
+ }
+ return data;
+}
+
+static void
+_dispatch_data_dispose(dispatch_data_t dd)
+{
+ dispatch_block_t destructor = dd->destructor;
+ if (destructor == DISPATCH_DATA_DESTRUCTOR_DEFAULT) {
+ size_t i;
+ for (i = 0; i < dd->num_records; ++i) {
+ _dispatch_data_release(dd->records[i].data_object);
+ }
+#if DISPATCH_DATA_MOVABLE
+ } else if (destructor == DISPATCH_DATA_DESTRUCTOR_UNLOCK) {
+ dispatch_data_t data = (dispatch_data_t)dd->records[0].data_object;
+ (void)dispatch_atomic_dec2o(data, locked);
+ _dispatch_data_release(data);
+#endif
+ } else if (destructor == DISPATCH_DATA_DESTRUCTOR_FREE) {
+ free(dd->records[0].data_object);
+ } else {
+ dispatch_async_f(dd->do_targetq, destructor,
+ _dispatch_call_block_and_release);
+ }
+ _dispatch_dispose(dd);
+}
+
+static size_t
+_dispatch_data_debug(dispatch_data_t dd, char* buf, size_t bufsiz)
+{
+ size_t offset = 0;
+ if (dd->leaf) {
+ offset += snprintf(&buf[offset], bufsiz - offset,
+ "leaf: %d, size: %zd, data: %p", dd->leaf, dd->size,
+ dd->records[0].data_object);
+ } else {
+ offset += snprintf(&buf[offset], bufsiz - offset,
+ "leaf: %d, size: %zd, num_records: %zd", dd->leaf,
+ dd->size, dd->num_records);
+ size_t i;
+ for (i = 0; i < dd->num_records; ++i) {
+ range_record r = dd->records[i];
+ offset += snprintf(&buf[offset], bufsiz - offset,
+ "records[%zd] from: %zd, length %zd, data_object: %p", i,
+ r.from, r.length, r.data_object);
+ }
+ }
+ return offset;
+}
+
+size_t
+dispatch_data_get_size(dispatch_data_t dd)
+{
+ return dd->size;
+}
+
+dispatch_data_t
+dispatch_data_create_concat(dispatch_data_t dd1, dispatch_data_t dd2)
+{
+ dispatch_data_t data;
+ if (!dd1->size) {
+ _dispatch_data_retain(dd2);
+ return dd2;
+ }
+ if (!dd2->size) {
+ _dispatch_data_retain(dd1);
+ return dd1;
+ }
+ data = _dispatch_data_init(dd1->num_records + dd2->num_records);
+ data->size = dd1->size + dd2->size;
+ // Copy the constituent records into the newly created data object
+ memcpy(data->records, dd1->records, dd1->num_records *
+ sizeof(range_record));
+ memcpy(data->records + dd1->num_records, dd2->records, dd2->num_records *
+ sizeof(range_record));
+ // Reference leaf objects as sub-objects
+ if (dd1->leaf) {
+ data->records[0].data_object = dd1;
+ }
+ if (dd2->leaf) {
+ data->records[dd1->num_records].data_object = dd2;
+ }
+ size_t i;
+ for (i = 0; i < data->num_records; ++i) {
+ _dispatch_data_retain(data->records[i].data_object);
+ }
+ return data;
+}
+
+dispatch_data_t
+dispatch_data_create_subrange(dispatch_data_t dd, size_t offset,
+ size_t length)
+{
+ dispatch_data_t data;
+ if (offset >= dd->size || !length) {
+ return dispatch_data_empty;
+ } else if ((offset + length) > dd->size) {
+ length = dd->size - offset;
+ } else if (length == dd->size) {
+ _dispatch_data_retain(dd);
+ return dd;
+ }
+ if (dd->leaf) {
+ data = _dispatch_data_init(1);
+ data->size = length;
+ data->records[0].from = offset;
+ data->records[0].length = length;
+ data->records[0].data_object = dd;
+ _dispatch_data_retain(dd);
+ return data;
+ }
+ // Subrange of a composite dispatch data object: find the record containing
+ // the specified offset
+ data = dispatch_data_empty;
+ size_t i = 0, bytes_left = length;
+ while (i < dd->num_records && offset >= dd->records[i].length) {
+ offset -= dd->records[i++].length;
+ }
+ while (i < dd->num_records) {
+ size_t record_len = dd->records[i].length - offset;
+ if (record_len > bytes_left) {
+ record_len = bytes_left;
+ }
+ dispatch_data_t subrange = dispatch_data_create_subrange(
+ dd->records[i].data_object, dd->records[i].from + offset,
+ record_len);
+ dispatch_data_t concat = dispatch_data_create_concat(data, subrange);
+ _dispatch_data_release(data);
+ _dispatch_data_release(subrange);
+ data = concat;
+ bytes_left -= record_len;
+ if (!bytes_left) {
+ return data;
+ }
+ offset = 0;
+ i++;
+ }
+ // Crashing here indicates memory corruption of passed in data object
+ DISPATCH_CRASH("dispatch_data_create_subrange out of bounds");
+ return NULL;
+}
+
+// When mapping a leaf object or a subrange of a leaf object, return a direct
+// pointer to the represented buffer. For all other data objects, copy the
+// represented buffers into a contiguous area. In the future it might
+// be possible to relocate the buffers instead (if not marked as locked).
+dispatch_data_t
+dispatch_data_create_map(dispatch_data_t dd, const void **buffer_ptr,
+ size_t *size_ptr)
+{
+ dispatch_data_t data = dd;
+ void *buffer = NULL;
+ size_t size = dd->size, offset = 0;
+ if (!size) {
+ data = dispatch_data_empty;
+ goto out;
+ }
+ if (!dd->leaf && dd->num_records == 1 &&
+ ((dispatch_data_t)dd->records[0].data_object)->leaf) {
+ offset = dd->records[0].from;
+ dd = (dispatch_data_t)(dd->records[0].data_object);
+ }
+ if (dd->leaf) {
+#if DISPATCH_DATA_MOVABLE
+ data = _dispatch_data_init(1);
+ // Make sure the underlying leaf object does not move the backing buffer
+ (void)dispatch_atomic_inc2o(dd, locked);
+ data->size = size;
+ data->destructor = DISPATCH_DATA_DESTRUCTOR_UNLOCK;
+ data->records[0].data_object = dd;
+ data->records[0].from = offset;
+ data->records[0].length = size;
+ _dispatch_data_retain(dd);
+#else
+ _dispatch_data_retain(data);
+#endif
+ buffer = dd->records[0].data_object + offset;
+ goto out;
+ }
+ // Composite data object, copy the represented buffers
+ buffer = malloc(size);
+ if (!buffer) {
+ data = NULL;
+ size = 0;
+ goto out;
+ }
+ dispatch_data_apply(dd, ^(dispatch_data_t region DISPATCH_UNUSED,
+ size_t off, const void* buf, size_t len) {
+ memcpy(buffer + off, buf, len);
+ return (bool)true;
+ });
+ data = dispatch_data_create(buffer, size, NULL,
+ DISPATCH_DATA_DESTRUCTOR_FREE);
+out:
+ if (buffer_ptr) {
+ *buffer_ptr = buffer;
+ }
+ if (size_ptr) {
+ *size_ptr = size;
+ }
+ return data;
+}
+
+static bool
+_dispatch_data_apply(dispatch_data_t dd, size_t offset, size_t from,
+ size_t size, dispatch_data_applier_t applier)
+{
+ bool result = true;
+ dispatch_data_t data = dd;
+ const void *buffer;
+ dispatch_assert(dd->size);
+#if DISPATCH_DATA_MOVABLE
+ if (dd->leaf) {
+ data = _dispatch_data_init(1);
+ // Make sure the underlying leaf object does not move the backing buffer
+ (void)dispatch_atomic_inc2o(dd, locked);
+ data->size = size;
+ data->destructor = DISPATCH_DATA_DESTRUCTOR_UNLOCK;
+ data->records[0].data_object = dd;
+ data->records[0].from = from;
+ data->records[0].length = size;
+ _dispatch_data_retain(dd);
+ buffer = dd->records[0].data_object + from;
+ result = applier(data, offset, buffer, size);
+ _dispatch_data_release(data);
+ return result;
+ }
+#else
+ if (!dd->leaf && dd->num_records == 1 &&
+ ((dispatch_data_t)dd->records[0].data_object)->leaf) {
+ from = dd->records[0].from;
+ dd = (dispatch_data_t)(dd->records[0].data_object);
+ }
+ if (dd->leaf) {
+ buffer = dd->records[0].data_object + from;
+ return applier(data, offset, buffer, size);
+ }
+#endif
+ size_t i;
+ for (i = 0; i < dd->num_records && result; ++i) {
+ result = _dispatch_data_apply(dd->records[i].data_object,
+ offset, dd->records[i].from, dd->records[i].length,
+ applier);
+ offset += dd->records[i].length;
+ }
+ return result;
+}
+
+bool
+dispatch_data_apply(dispatch_data_t dd, dispatch_data_applier_t applier)
+{
+ if (!dd->size) {
+ return true;
+ }
+ return _dispatch_data_apply(dd, 0, 0, dd->size, applier);
+}
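+
+// Example usage of dispatch_data_apply above (illustrative sketch): the
+// applier block is invoked once per contiguous region, in order, and
+// traversal stops early if it returns false:
+//
+//	__block size_t total = 0;
+//	dispatch_data_apply(data, ^bool(dispatch_data_t region, size_t offset,
+//			const void *buffer, size_t size) {
+//		total += size;
+//		return true; // keep iterating
+//	});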
+
+// Returns either a leaf object or an object composed of a single leaf object
+dispatch_data_t
+dispatch_data_copy_region(dispatch_data_t dd, size_t location,
+ size_t *offset_ptr)
+{
+ if (location >= dd->size) {
+ *offset_ptr = 0;
+ return dispatch_data_empty;
+ }
+ dispatch_data_t data;
+ size_t size = dd->size, offset = 0, from = 0;
+ while (true) {
+ if (dd->leaf) {
+ _dispatch_data_retain(dd);
+ *offset_ptr = offset;
+ if (size == dd->size) {
+ return dd;
+ } else {
+ // Create a new object for the requested subrange of the leaf
+ data = _dispatch_data_init(1);
+ data->size = size;
+ data->records[0].from = from;
+ data->records[0].length = size;
+ data->records[0].data_object = dd;
+ return data;
+ }
+ } else {
+ // Find record at the specified location
+ size_t i, pos;
+ for (i = 0; i < dd->num_records; ++i) {
+ pos = offset + dd->records[i].length;
+ if (location < pos) {
+ size = dd->records[i].length;
+ from = dd->records[i].from;
+ data = (dispatch_data_t)(dd->records[i].data_object);
+ if (dd->num_records == 1 && data->leaf) {
+ // Return objects composed of a single leaf node
+ *offset_ptr = offset;
+ _dispatch_data_retain(dd);
+ return dd;
+ } else {
+ // Drill down into other objects
+ dd = data;
+ break;
+ }
+ } else {
+ offset = pos;
+ }
+ }
+ }
+ }
+}
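+
+// Example usage of dispatch_data_copy_region above (illustrative sketch):
+// find the region containing byte `location` and the offset at which that
+// region starts within the parent object:
+//
+//	size_t region_offset;
+//	dispatch_data_t region = dispatch_data_copy_region(data, location,
+//			&region_offset);
+//	// ... use region ..., then dispatch_release(region);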
diff --git a/src/data_internal.h b/src/data_internal.h
new file mode 100644
index 0000000..314efa7
--- /dev/null
+++ b/src/data_internal.h
@@ -0,0 +1,58 @@
+/*
+ * Copyright (c) 2009-2011 Apple Inc. All rights reserved.
+ *
+ * @APPLE_APACHE_LICENSE_HEADER_START@
+ *
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ *
+ * @APPLE_APACHE_LICENSE_HEADER_END@
+ */
+
+/*
+ * IMPORTANT: This header file describes INTERNAL interfaces to libdispatch
+ * which are subject to change in future releases of Mac OS X. Any applications
+ * relying on these interfaces WILL break.
+ */
+
+#ifndef __DISPATCH_DATA_INTERNAL__
+#define __DISPATCH_DATA_INTERNAL__
+
+#ifndef __DISPATCH_INDIRECT__
+#error "Please #include <dispatch/dispatch.h> instead of this file directly."
+#include <dispatch/base.h> // for HeaderDoc
+#endif
+
+struct dispatch_data_vtable_s {
+ DISPATCH_VTABLE_HEADER(dispatch_data_s);
+};
+
+extern const struct dispatch_data_vtable_s _dispatch_data_vtable;
+
+typedef struct range_record_s {
+ void* data_object;
+ size_t from;
+ size_t length;
+} range_record;
+
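+// Layout note (descriptive, as used by src/data.c): for a leaf object,
+// records[0].data_object points at the raw backing buffer, owned via
+// `destructor`; for a composite object, each record references another
+// dispatch_data_t covering `length` bytes starting at `from` within it.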
+struct dispatch_data_s {
+ DISPATCH_STRUCT_HEADER(dispatch_data_s, dispatch_data_vtable_s);
+#if DISPATCH_DATA_MOVABLE
+ unsigned int locked;
+#endif
+ bool leaf;
+ dispatch_block_t destructor;
+ size_t size, num_records;
+ range_record records[];
+};
+
+#endif // __DISPATCH_DATA_INTERNAL__
diff --git a/src/init.c b/src/init.c
new file mode 100644
index 0000000..d72219c
--- /dev/null
+++ b/src/init.c
@@ -0,0 +1,622 @@
+/*
+ * Copyright (c) 2008-2011 Apple Inc. All rights reserved.
+ *
+ * @APPLE_APACHE_LICENSE_HEADER_START@
+ *
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ *
+ * @APPLE_APACHE_LICENSE_HEADER_END@
+ */
+
+// Contains exported global data and initialization & other routines that must
+// only exist once in the shared library even when resolvers are used.
+
+#include "internal.h"
+
+#if HAVE_MACH
+#include "protocolServer.h"
+#endif
+
+#pragma mark -
+#pragma mark dispatch_init
+
+#if USE_LIBDISPATCH_INIT_CONSTRUCTOR
+DISPATCH_NOTHROW __attribute__((constructor))
+void
+_libdispatch_init(void);
+
+DISPATCH_EXPORT DISPATCH_NOTHROW
+void
+_libdispatch_init(void)
+{
+ libdispatch_init();
+}
+#endif
+
+DISPATCH_EXPORT DISPATCH_NOTHROW
+void
+dispatch_atfork_prepare(void)
+{
+}
+
+DISPATCH_EXPORT DISPATCH_NOTHROW
+void
+dispatch_atfork_parent(void)
+{
+}
+
+void
+dummy_function(void)
+{
+}
+
+long
+dummy_function_r0(void)
+{
+ return 0;
+}
+
+#pragma mark -
+#pragma mark dispatch_globals
+
+#if DISPATCH_COCOA_COMPAT
+// dispatch_begin_thread_4GC having a non-default value triggers GC-only slow
+// paths and is checked frequently; testing against NULL is faster than
+// comparing for equality with "dummy_function"
+void (*dispatch_begin_thread_4GC)(void) = NULL;
+void (*dispatch_end_thread_4GC)(void) = dummy_function;
+void (*dispatch_no_worker_threads_4GC)(void) = NULL;
+void *(*_dispatch_begin_NSAutoReleasePool)(void) = (void *)dummy_function;
+void (*_dispatch_end_NSAutoReleasePool)(void *) = (void *)dummy_function;
+#endif
+
+struct _dispatch_hw_config_s _dispatch_hw_config;
+bool _dispatch_safe_fork = true;
+
+const struct dispatch_queue_offsets_s dispatch_queue_offsets = {
+ .dqo_version = 3,
+ .dqo_label = offsetof(struct dispatch_queue_s, dq_label),
+ .dqo_label_size = sizeof(((dispatch_queue_t)NULL)->dq_label),
+ .dqo_flags = 0,
+ .dqo_flags_size = 0,
+ .dqo_width = offsetof(struct dispatch_queue_s, dq_width),
+ .dqo_width_size = sizeof(((dispatch_queue_t)NULL)->dq_width),
+ .dqo_serialnum = offsetof(struct dispatch_queue_s, dq_serialnum),
+ .dqo_serialnum_size = sizeof(((dispatch_queue_t)NULL)->dq_serialnum),
+ .dqo_running = offsetof(struct dispatch_queue_s, dq_running),
+ .dqo_running_size = sizeof(((dispatch_queue_t)NULL)->dq_running),
+};
+
+// 6618342 Contact the team that owns the Instrument DTrace probe before
+// renaming this symbol
+DISPATCH_CACHELINE_ALIGN
+struct dispatch_queue_s _dispatch_main_q = {
+#if !DISPATCH_USE_RESOLVERS
+ .do_vtable = &_dispatch_queue_vtable,
+ .do_targetq = &_dispatch_root_queues[
+ DISPATCH_ROOT_QUEUE_IDX_DEFAULT_OVERCOMMIT_PRIORITY],
+#endif
+ .do_ref_cnt = DISPATCH_OBJECT_GLOBAL_REFCNT,
+ .do_xref_cnt = DISPATCH_OBJECT_GLOBAL_REFCNT,
+ .do_suspend_cnt = DISPATCH_OBJECT_SUSPEND_LOCK,
+ .dq_label = "com.apple.main-thread",
+ .dq_running = 1,
+ .dq_width = 1,
+ .dq_serialnum = 1,
+};
+
+const struct dispatch_queue_attr_vtable_s dispatch_queue_attr_vtable = {
+ .do_type = DISPATCH_QUEUE_ATTR_TYPE,
+ .do_kind = "queue-attr",
+};
+
+struct dispatch_queue_attr_s _dispatch_queue_attr_concurrent = {
+ .do_vtable = &dispatch_queue_attr_vtable,
+ .do_ref_cnt = DISPATCH_OBJECT_GLOBAL_REFCNT,
+ .do_xref_cnt = DISPATCH_OBJECT_GLOBAL_REFCNT,
+ .do_next = DISPATCH_OBJECT_LISTLESS,
+};
+
+struct dispatch_data_s _dispatch_data_empty = {
+#if !DISPATCH_USE_RESOLVERS
+ .do_vtable = &_dispatch_data_vtable,
+#endif
+ .do_ref_cnt = DISPATCH_OBJECT_GLOBAL_REFCNT,
+ .do_xref_cnt = DISPATCH_OBJECT_GLOBAL_REFCNT,
+ .do_next = DISPATCH_OBJECT_LISTLESS,
+};
+
+const dispatch_block_t _dispatch_data_destructor_free = ^{
+ DISPATCH_CRASH("free destructor called");
+};
+
+#pragma mark -
+#pragma mark dispatch_log
+
+static char _dispatch_build[16];
+
+static void
+_dispatch_bug_init(void *context DISPATCH_UNUSED)
+{
+#ifdef __APPLE__
+ int mib[] = { CTL_KERN, KERN_OSVERSION };
+ size_t bufsz = sizeof(_dispatch_build);
+
+ sysctl(mib, 2, _dispatch_build, &bufsz, NULL, 0);
+#else
+ /*
+ * XXXRW: What to do here for !Mac OS X?
+ */
+ memset(_dispatch_build, 0, sizeof(_dispatch_build));
+#endif
+}
+
+void
+_dispatch_bug(size_t line, long val)
+{
+ static dispatch_once_t pred;
+ static void *last_seen;
+ void *ra = __builtin_return_address(0);
+
+ dispatch_once_f(&pred, NULL, _dispatch_bug_init);
+ if (last_seen != ra) {
+ last_seen = ra;
+ _dispatch_log("BUG in libdispatch: %s - %lu - 0x%lx",
+ _dispatch_build, (unsigned long)line, val);
+ }
+}
+
+void
+_dispatch_bug_mach_client(const char* msg, mach_msg_return_t kr)
+{
+ static void *last_seen;
+ void *ra = __builtin_return_address(0);
+ if (last_seen != ra) {
+ last_seen = ra;
+ _dispatch_log("BUG in libdispatch client: %s %s - 0x%x", msg,
+ mach_error_string(kr), kr);
+ }
+}
+
+void
+_dispatch_abort(size_t line, long val)
+{
+ _dispatch_bug(line, val);
+ abort();
+}
+
+void
+_dispatch_log(const char *msg, ...)
+{
+ va_list ap;
+
+ va_start(ap, msg);
+ _dispatch_logv(msg, ap);
+ va_end(ap);
+}
+
+static FILE *dispatch_logfile;
+static bool dispatch_log_disabled;
+
+static void
+_dispatch_logv_init(void *context DISPATCH_UNUSED)
+{
+#if DISPATCH_DEBUG
+ bool log_to_file = true;
+#else
+ bool log_to_file = false;
+#endif
+ char *e = getenv("LIBDISPATCH_LOG");
+ if (e) {
+ if (strcmp(e, "YES") == 0) {
+ // default
+ } else if (strcmp(e, "NO") == 0) {
+ dispatch_log_disabled = true;
+ } else if (strcmp(e, "syslog") == 0) {
+ log_to_file = false;
+ } else if (strcmp(e, "file") == 0) {
+ log_to_file = true;
+ } else if (strcmp(e, "stderr") == 0) {
+ log_to_file = true;
+ dispatch_logfile = stderr;
+ }
+ }
+ if (!dispatch_log_disabled) {
+ if (log_to_file && !dispatch_logfile) {
+ char path[PATH_MAX];
+ snprintf(path, sizeof(path), "/var/tmp/libdispatch.%d.log",
+ getpid());
+ dispatch_logfile = fopen(path, "a");
+ }
+ if (dispatch_logfile) {
+ struct timeval tv;
+ gettimeofday(&tv, NULL);
+ fprintf(dispatch_logfile, "=== log file opened for %s[%u] at "
+ "%ld.%06u ===\n", getprogname() ?: "", getpid(),
+ tv.tv_sec, tv.tv_usec);
+ fflush(dispatch_logfile);
+ }
+ }
+}
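+
+// Example (illustrative): a run with log output redirected to stderr rather
+// than the default /var/tmp/libdispatch.<pid>.log file:
+//
+//	LIBDISPATCH_LOG=stderr ./app
+//
+// The values accepted above are YES, NO, syslog, file and stderr.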
+
+void
+_dispatch_logv(const char *msg, va_list ap)
+{
+ static dispatch_once_t pred;
+ dispatch_once_f(&pred, NULL, _dispatch_logv_init);
+
+ if (slowpath(dispatch_log_disabled)) {
+ return;
+ }
+ if (slowpath(dispatch_logfile)) {
+ vfprintf(dispatch_logfile, msg, ap);
+ // TODO: May cause interleaving with another thread's log
+ fputc('\n', dispatch_logfile);
+ fflush(dispatch_logfile);
+ return;
+ }
+ vsyslog(LOG_NOTICE, msg, ap);
+}
+
+#pragma mark -
+#pragma mark dispatch_debug
+
+void
+dispatch_debug(dispatch_object_t dou, const char *msg, ...)
+{
+ va_list ap;
+
+ va_start(ap, msg);
+ dispatch_debugv(dou._do, msg, ap);
+ va_end(ap);
+}
+
+void
+dispatch_debugv(dispatch_object_t dou, const char *msg, va_list ap)
+{
+ char buf[4096];
+ size_t offs;
+
+ if (dou._do && dou._do->do_vtable->do_debug) {
+ offs = dx_debug(dou._do, buf, sizeof(buf));
+ } else {
+ offs = snprintf(buf, sizeof(buf), "NULL vtable slot");
+ }
+
+ snprintf(buf + offs, sizeof(buf) - offs, ": %s", msg);
+ _dispatch_logv(buf, ap);
+}
+
+#pragma mark -
+#pragma mark dispatch_block_t
+
+#ifdef __BLOCKS__
+
+#undef _dispatch_Block_copy
+dispatch_block_t
+_dispatch_Block_copy(dispatch_block_t db)
+{
+ dispatch_block_t rval;
+
+ while (!(rval = Block_copy(db))) {
+ sleep(1);
+ }
+ return rval;
+}
+
+void
+_dispatch_call_block_and_release(void *block)
+{
+ void (^b)(void) = block;
+ b();
+ Block_release(b);
+}
+
+#endif // __BLOCKS__
+
+#pragma mark -
+#pragma mark dispatch_client_callout
+
+#if DISPATCH_USE_CLIENT_CALLOUT
+
+#undef _dispatch_client_callout
+#undef _dispatch_client_callout2
+
+DISPATCH_NOINLINE
+void
+_dispatch_client_callout(void *ctxt, dispatch_function_t f)
+{
+ return f(ctxt);
+}
+
+DISPATCH_NOINLINE
+void
+_dispatch_client_callout2(void *ctxt, size_t i, void (*f)(void *, size_t))
+{
+ return f(ctxt, i);
+}
+
+#endif
+
+#pragma mark -
+#pragma mark dispatch_source_types
+
+static void
+dispatch_source_type_timer_init(dispatch_source_t ds,
+ dispatch_source_type_t type DISPATCH_UNUSED,
+ uintptr_t handle DISPATCH_UNUSED,
+ unsigned long mask,
+ dispatch_queue_t q DISPATCH_UNUSED)
+{
+ ds->ds_refs = calloc(1ul, sizeof(struct dispatch_timer_source_refs_s));
+ if (slowpath(!ds->ds_refs)) return;
+ ds->ds_needs_rearm = true;
+ ds->ds_is_timer = true;
+ ds_timer(ds->ds_refs).flags = mask;
+}
+
+const struct dispatch_source_type_s _dispatch_source_type_timer = {
+ .ke = {
+ .filter = DISPATCH_EVFILT_TIMER,
+ },
+ .mask = DISPATCH_TIMER_WALL_CLOCK,
+ .init = dispatch_source_type_timer_init,
+};
+
+const struct dispatch_source_type_s _dispatch_source_type_read = {
+ .ke = {
+ .filter = EVFILT_READ,
+ .flags = EV_DISPATCH,
+ },
+};
+
+const struct dispatch_source_type_s _dispatch_source_type_write = {
+ .ke = {
+ .filter = EVFILT_WRITE,
+ .flags = EV_DISPATCH,
+ },
+};
+
+#if DISPATCH_USE_VM_PRESSURE
+#if TARGET_IPHONE_SIMULATOR // rdar://problem/9219483
+static int _dispatch_ios_simulator_memory_warnings_fd = -1;
+static void
+_dispatch_ios_simulator_vm_source_init(void *context DISPATCH_UNUSED)
+{
+ char *e = getenv("IPHONE_SIMULATOR_MEMORY_WARNINGS");
+ if (!e) return;
+ _dispatch_ios_simulator_memory_warnings_fd = open(e, O_EVTONLY);
+ if (_dispatch_ios_simulator_memory_warnings_fd == -1) {
+ (void)dispatch_assume_zero(errno);
+ }
+}
+static void
+dispatch_source_type_vm_init(dispatch_source_t ds,
+ dispatch_source_type_t type DISPATCH_UNUSED,
+ uintptr_t handle DISPATCH_UNUSED,
+ unsigned long mask,
+ dispatch_queue_t q DISPATCH_UNUSED)
+{
+ static dispatch_once_t pred;
+ dispatch_once_f(&pred, NULL, _dispatch_ios_simulator_vm_source_init);
+ ds->ds_dkev->dk_kevent.ident = (mask & DISPATCH_VM_PRESSURE ?
+ _dispatch_ios_simulator_memory_warnings_fd : -1);
+}
+
+const struct dispatch_source_type_s _dispatch_source_type_vm = {
+ .ke = {
+ .filter = EVFILT_VNODE,
+ .flags = EV_CLEAR,
+ },
+ .mask = NOTE_ATTRIB,
+ .init = dispatch_source_type_vm_init,
+};
+#else
+static void
+dispatch_source_type_vm_init(dispatch_source_t ds,
+ dispatch_source_type_t type DISPATCH_UNUSED,
+ uintptr_t handle DISPATCH_UNUSED,
+ unsigned long mask DISPATCH_UNUSED,
+ dispatch_queue_t q DISPATCH_UNUSED)
+{
+ ds->ds_is_level = false;
+}
+
+const struct dispatch_source_type_s _dispatch_source_type_vm = {
+ .ke = {
+ .filter = EVFILT_VM,
+ .flags = EV_DISPATCH,
+ },
+ .mask = NOTE_VM_PRESSURE,
+ .init = dispatch_source_type_vm_init,
+};
+#endif
+#endif
+
+const struct dispatch_source_type_s _dispatch_source_type_proc = {
+ .ke = {
+ .filter = EVFILT_PROC,
+ .flags = EV_CLEAR,
+ },
+ .mask = NOTE_EXIT|NOTE_FORK|NOTE_EXEC
+#if HAVE_DECL_NOTE_SIGNAL
+ |NOTE_SIGNAL
+#endif
+#if HAVE_DECL_NOTE_REAP
+ |NOTE_REAP
+#endif
+ ,
+};
+
+const struct dispatch_source_type_s _dispatch_source_type_signal = {
+ .ke = {
+ .filter = EVFILT_SIGNAL,
+ },
+};
+
+const struct dispatch_source_type_s _dispatch_source_type_vnode = {
+ .ke = {
+ .filter = EVFILT_VNODE,
+ .flags = EV_CLEAR,
+ },
+ .mask = NOTE_DELETE|NOTE_WRITE|NOTE_EXTEND|NOTE_ATTRIB|NOTE_LINK|
+ NOTE_RENAME|NOTE_REVOKE
+#if HAVE_DECL_NOTE_NONE
+ |NOTE_NONE
+#endif
+ ,
+};
+
+const struct dispatch_source_type_s _dispatch_source_type_vfs = {
+ .ke = {
+ .filter = EVFILT_FS,
+ .flags = EV_CLEAR,
+ },
+ .mask = VQ_NOTRESP|VQ_NEEDAUTH|VQ_LOWDISK|VQ_MOUNT|VQ_UNMOUNT|VQ_DEAD|
+ VQ_ASSIST|VQ_NOTRESPLOCK
+#if HAVE_DECL_VQ_UPDATE
+ |VQ_UPDATE
+#endif
+#if HAVE_DECL_VQ_VERYLOWDISK
+ |VQ_VERYLOWDISK
+#endif
+ ,
+};
+
+const struct dispatch_source_type_s _dispatch_source_type_data_add = {
+ .ke = {
+ .filter = DISPATCH_EVFILT_CUSTOM_ADD,
+ },
+};
+
+const struct dispatch_source_type_s _dispatch_source_type_data_or = {
+ .ke = {
+ .filter = DISPATCH_EVFILT_CUSTOM_OR,
+ .flags = EV_CLEAR,
+ .fflags = ~0,
+ },
+};
+
+#if HAVE_MACH
+
+static void
+dispatch_source_type_mach_send_init(dispatch_source_t ds,
+ dispatch_source_type_t type DISPATCH_UNUSED,
+ uintptr_t handle DISPATCH_UNUSED, unsigned long mask,
+ dispatch_queue_t q DISPATCH_UNUSED)
+{
+ static dispatch_once_t pred;
+ dispatch_once_f(&pred, NULL, _dispatch_mach_notify_source_init);
+ if (!mask) {
+ // Preserve legacy behavior that (mask == 0) => DISPATCH_MACH_SEND_DEAD
+ ds->ds_dkev->dk_kevent.fflags = DISPATCH_MACH_SEND_DEAD;
+ ds->ds_pending_data_mask = DISPATCH_MACH_SEND_DEAD;
+ }
+}
+
+const struct dispatch_source_type_s _dispatch_source_type_mach_send = {
+ .ke = {
+ .filter = EVFILT_MACHPORT,
+ .flags = EV_CLEAR,
+ },
+ .mask = DISPATCH_MACH_SEND_DEAD|DISPATCH_MACH_SEND_POSSIBLE,
+ .init = dispatch_source_type_mach_send_init,
+};
+
+static void
+dispatch_source_type_mach_recv_init(dispatch_source_t ds,
+ dispatch_source_type_t type DISPATCH_UNUSED,
+ uintptr_t handle DISPATCH_UNUSED,
+ unsigned long mask DISPATCH_UNUSED,
+ dispatch_queue_t q DISPATCH_UNUSED)
+{
+ ds->ds_is_level = false;
+}
+
+const struct dispatch_source_type_s _dispatch_source_type_mach_recv = {
+ .ke = {
+ .filter = EVFILT_MACHPORT,
+ .flags = EV_DISPATCH,
+ .fflags = DISPATCH_MACH_RECV_MESSAGE,
+ },
+ .init = dispatch_source_type_mach_recv_init,
+};
+
+#pragma mark -
+#pragma mark dispatch_mig
+
+void *
+dispatch_mach_msg_get_context(mach_msg_header_t *msg)
+{
+ mach_msg_context_trailer_t *tp;
+ void *context = NULL;
+
+ tp = (mach_msg_context_trailer_t *)((uint8_t *)msg +
+ round_msg(msg->msgh_size));
+ if (tp->msgh_trailer_size >=
+ (mach_msg_size_t)sizeof(mach_msg_context_trailer_t)) {
+ context = (void *)(uintptr_t)tp->msgh_context;
+ }
+ return context;
+}
+
+kern_return_t
+_dispatch_wakeup_main_thread(mach_port_t mp DISPATCH_UNUSED)
+{
+	// dummy function just to pop the main thread out of mach_msg()
+ return 0;
+}
+
+kern_return_t
+_dispatch_consume_send_once_right(mach_port_t mp DISPATCH_UNUSED)
+{
+ // dummy function to consume a send-once right
+ return 0;
+}
+
+kern_return_t
+_dispatch_mach_notify_port_destroyed(mach_port_t notify DISPATCH_UNUSED,
+ mach_port_t name)
+{
+ kern_return_t kr;
+ // this function should never be called
+ (void)dispatch_assume_zero(name);
+ kr = mach_port_mod_refs(mach_task_self(), name, MACH_PORT_RIGHT_RECEIVE,-1);
+ DISPATCH_VERIFY_MIG(kr);
+ (void)dispatch_assume_zero(kr);
+ return KERN_SUCCESS;
+}
+
+kern_return_t
+_dispatch_mach_notify_no_senders(mach_port_t notify,
+ mach_port_mscount_t mscnt DISPATCH_UNUSED)
+{
+ // this function should never be called
+ (void)dispatch_assume_zero(notify);
+ return KERN_SUCCESS;
+}
+
+kern_return_t
+_dispatch_mach_notify_send_once(mach_port_t notify DISPATCH_UNUSED)
+{
+ // we only register for dead-name notifications
+ // some code deallocated our send-once right without consuming it
+#if DISPATCH_DEBUG
+ _dispatch_log("Corruption: An app/library deleted a libdispatch "
+ "dead-name notification");
+#endif
+ return KERN_SUCCESS;
+}
+
+
+#endif // HAVE_MACH
diff --git a/src/internal.h b/src/internal.h
index a69b36a..24d3a04 100644
--- a/src/internal.h
+++ b/src/internal.h
@@ -1,20 +1,20 @@
/*
- * Copyright (c) 2008-2009 Apple Inc. All rights reserved.
+ * Copyright (c) 2008-2011 Apple Inc. All rights reserved.
*
* @APPLE_APACHE_LICENSE_HEADER_START@
- *
+ *
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
- *
+ *
* http://www.apache.org/licenses/LICENSE-2.0
- *
+ *
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
- *
+ *
* @APPLE_APACHE_LICENSE_HEADER_END@
*/
@@ -27,62 +27,33 @@
#ifndef __DISPATCH_INTERNAL__
#define __DISPATCH_INTERNAL__
-#include "config/config.h"
-
-#ifdef __APPLE__
-#include <TargetConditionals.h>
-#endif
-
-#if TARGET_OS_WIN32
-// Include Win32 headers early in order to minimize the
-// likelihood of name pollution from dispatch headers.
-
-#ifndef WINVER
-#define WINVER 0x502
-#endif
-
-#ifndef _WIN32_WINNT
-#define _WIN32_WINNT 0x502
-#endif
-
-#ifndef _MSC_VER
-#define _MSC_VER 1400
-#pragma warning(disable:4159)
-#endif
-
-#define WIN32_LEAN_AND_MEAN 1
-#define _CRT_SECURE_NO_DEPRECATE 1
-#define _CRT_SECURE_NO_WARNINGS 1
-
-#define BOOL WINBOOL
-#include <Windows.h>
-#undef BOOL
-
-#endif /* TARGET_OS_WIN32 */
+#include <config/config.h>
#define __DISPATCH_BUILDING_DISPATCH__
#define __DISPATCH_INDIRECT__
-#include "dispatch/dispatch.h"
-#include "dispatch/base.h"
-#include "dispatch/time.h"
-#include "dispatch/queue.h"
-#include "dispatch/object.h"
-#include "dispatch/source.h"
-#include "dispatch/group.h"
-#include "dispatch/semaphore.h"
-#include "dispatch/once.h"
-#include "dispatch/benchmark.h"
+
+#include <dispatch/dispatch.h>
+#include <dispatch/base.h>
+
+
+#include <dispatch/time.h>
+#include <dispatch/queue.h>
+#include <dispatch/object.h>
+#include <dispatch/source.h>
+#include <dispatch/group.h>
+#include <dispatch/semaphore.h>
+#include <dispatch/once.h>
+#include <dispatch/data.h>
+#include <dispatch/io.h>
/* private.h uses #include_next and must be included last to avoid picking
* up installed headers. */
#include "queue_private.h"
#include "source_private.h"
+#include "benchmark.h"
#include "private.h"
-#ifndef DISPATCH_NO_LEGACY
-#include "legacy.h"
-#endif
/* More #includes at EOF (dependent on the contents of internal.h) ... */
/* The "_debug" library build */
@@ -90,6 +61,17 @@
#define DISPATCH_DEBUG 0
#endif
+#ifndef DISPATCH_PROFILE
+#define DISPATCH_PROFILE 0
+#endif
+
+#if DISPATCH_DEBUG && !defined(DISPATCH_USE_CLIENT_CALLOUT)
+#define DISPATCH_USE_CLIENT_CALLOUT 1
+#endif
+
+#if (DISPATCH_DEBUG || DISPATCH_PROFILE) && !defined(DISPATCH_USE_DTRACE)
+#define DISPATCH_USE_DTRACE 1
+#endif
#if HAVE_LIBKERN_OSCROSSENDIAN_H
#include <libkern/OSCrossEndian.h>
@@ -116,20 +98,16 @@
#if HAVE_MALLOC_MALLOC_H
#include <malloc/malloc.h>
#endif
+#include <sys/event.h>
#include <sys/mount.h>
#include <sys/queue.h>
#include <sys/stat.h>
-#if HAVE_SYS_SYSCTL_H
#include <sys/sysctl.h>
-#endif
#include <sys/socket.h>
#include <sys/time.h>
#include <netinet/in.h>
#ifdef __BLOCKS__
-#if TARGET_OS_WIN32
-#define BLOCK_EXPORT extern "C" __declspec(dllexport)
-#endif /* TARGET_OS_WIN32 */
#include <Block_private.h>
#include <Block.h>
#endif /* __BLOCKS__ */
@@ -154,7 +132,28 @@
#include <unistd.h>
#endif
-#define DISPATCH_NOINLINE __attribute__((noinline))
+#ifndef __has_builtin
+#define __has_builtin(x) 0
+#endif
+#ifndef __has_include
+#define __has_include(x) 0
+#endif
+#ifndef __has_feature
+#define __has_feature(x) 0
+#endif
+#ifndef __has_attribute
+#define __has_attribute(x) 0
+#endif
+
+#define DISPATCH_NOINLINE __attribute__((__noinline__))
+#define DISPATCH_USED __attribute__((__used__))
+#define DISPATCH_UNUSED __attribute__((__unused__))
+#define DISPATCH_WEAK __attribute__((__weak__))
+#if DISPATCH_DEBUG
+#define DISPATCH_ALWAYS_INLINE_NDEBUG
+#else
+#define DISPATCH_ALWAYS_INLINE_NDEBUG __attribute__((__always_inline__))
+#endif
// workaround 6368156
#ifdef NSEC_PER_SEC
@@ -171,207 +170,256 @@
#define NSEC_PER_USEC 1000ull
/* I wish we had __builtin_expect_range() */
-#if __GNUC__
-#define fastpath(x) ((typeof(x))__builtin_expect((long)(x), ~0l))
-#define slowpath(x) ((typeof(x))__builtin_expect((long)(x), 0l))
-#else
-#define fastpath(x) (x)
-#define slowpath(x) (x)
-#endif
+#define fastpath(x) ((typeof(x))__builtin_expect((long)(x), ~0l))
+#define slowpath(x) ((typeof(x))__builtin_expect((long)(x), 0l))
-void _dispatch_bug(size_t line, long val) __attribute__((__noinline__));
-void _dispatch_abort(size_t line, long val) __attribute__((__noinline__,__noreturn__));
-void _dispatch_log(const char *msg, ...) __attribute__((__noinline__,__format__(printf,1,2)));
-void _dispatch_logv(const char *msg, va_list) __attribute__((__noinline__,__format__(printf,1,0)));
+DISPATCH_NOINLINE
+void _dispatch_bug(size_t line, long val);
+DISPATCH_NOINLINE
+void _dispatch_bug_mach_client(const char *msg, mach_msg_return_t kr);
+DISPATCH_NOINLINE DISPATCH_NORETURN
+void _dispatch_abort(size_t line, long val);
+DISPATCH_NOINLINE __attribute__((__format__(printf,1,2)))
+void _dispatch_log(const char *msg, ...);
+DISPATCH_NOINLINE __attribute__((__format__(printf,1,0)))
+void _dispatch_logv(const char *msg, va_list);
/*
- * For reporting bugs within libdispatch when using the "_debug" version of the library.
+ * For reporting bugs within libdispatch when using the "_debug" version of the
+ * library.
*/
-#define dispatch_assert(e) do { \
- if (__builtin_constant_p(e)) { \
- char __compile_time_assert__[(bool)(e) ? 1 : -1] __attribute__((unused)); \
- } else { \
- typeof(e) _e = fastpath(e); /* always eval 'e' */ \
- if (DISPATCH_DEBUG && !_e) { \
- _dispatch_abort(__LINE__, (long)_e); \
- } \
- } \
+#define dispatch_assert(e) do { \
+ if (__builtin_constant_p(e)) { \
+ char __compile_time_assert__[(bool)(e) ? 1 : -1] DISPATCH_UNUSED; \
+ } else { \
+ typeof(e) _e = fastpath(e); /* always eval 'e' */ \
+ if (DISPATCH_DEBUG && !_e) { \
+ _dispatch_abort(__LINE__, (long)_e); \
+ } \
+ } \
} while (0)
-/* A lot of API return zero upon success and not-zero on fail. Let's capture and log the non-zero value */
-#define dispatch_assert_zero(e) do { \
- if (__builtin_constant_p(e)) { \
- char __compile_time_assert__[(bool)(!(e)) ? 1 : -1] __attribute__((unused)); \
- } else { \
- typeof(e) _e = slowpath(e); /* always eval 'e' */ \
- if (DISPATCH_DEBUG && _e) { \
- _dispatch_abort(__LINE__, (long)_e); \
- } \
- } \
+/*
+ * A lot of API return zero upon success and not-zero on fail. Let's capture
+ * and log the non-zero value
+ */
+#define dispatch_assert_zero(e) do { \
+ if (__builtin_constant_p(e)) { \
+ char __compile_time_assert__[(bool)(e) ? -1 : 1] DISPATCH_UNUSED; \
+ } else { \
+ typeof(e) _e = slowpath(e); /* always eval 'e' */ \
+ if (DISPATCH_DEBUG && _e) { \
+ _dispatch_abort(__LINE__, (long)_e); \
+ } \
+ } \
} while (0)
/*
- * For reporting bugs or impedance mismatches between libdispatch and external subsystems.
- * These do NOT abort(), and are always compiled into the product.
+ * For reporting bugs or impedance mismatches between libdispatch and external
+ * subsystems. These do NOT abort(), and are always compiled into the product.
*
* In particular, we wrap all system-calls with assume() macros.
*/
-#define dispatch_assume(e) ({ \
- typeof(e) _e = fastpath(e); /* always eval 'e' */ \
- if (!_e) { \
- if (__builtin_constant_p(e)) { \
- char __compile_time_assert__[(e) ? 1 : -1]; \
- (void)__compile_time_assert__; \
- } \
- _dispatch_bug(__LINE__, (long)_e); \
- } \
- _e; \
+#define dispatch_assume(e) ({ \
+ typeof(e) _e = fastpath(e); /* always eval 'e' */ \
+ if (!_e) { \
+ if (__builtin_constant_p(e)) { \
+ char __compile_time_assert__[(bool)(e) ? 1 : -1]; \
+ (void)__compile_time_assert__; \
+ } \
+ _dispatch_bug(__LINE__, (long)_e); \
+ } \
+ _e; \
})
-/* A lot of API return zero upon success and not-zero on fail. Let's capture and log the non-zero value */
-#define dispatch_assume_zero(e) ({ \
- typeof(e) _e = slowpath(e); /* always eval 'e' */ \
- if (_e) { \
- if (__builtin_constant_p(e)) { \
- char __compile_time_assert__[(e) ? -1 : 1]; \
- (void)__compile_time_assert__; \
- } \
- _dispatch_bug(__LINE__, (long)_e); \
- } \
- _e; \
+/*
+ * A lot of API return zero upon success and not-zero on fail. Let's capture
+ * and log the non-zero value
+ */
+#define dispatch_assume_zero(e) ({ \
+ typeof(e) _e = slowpath(e); /* always eval 'e' */ \
+ if (_e) { \
+ if (__builtin_constant_p(e)) { \
+ char __compile_time_assert__[(bool)(e) ? -1 : 1]; \
+ (void)__compile_time_assert__; \
+ } \
+ _dispatch_bug(__LINE__, (long)_e); \
+ } \
+ _e; \
})
/*
* For reporting bugs in clients when using the "_debug" version of the library.
*/
-#define dispatch_debug_assert(e, msg, args...) do { \
- if (__builtin_constant_p(e)) { \
- char __compile_time_assert__[(bool)(e) ? 1 : -1] __attribute__((unused)); \
- } else { \
- typeof(e) _e = fastpath(e); /* always eval 'e' */ \
- if (DISPATCH_DEBUG && !_e) { \
- _dispatch_log("%s() 0x%lx: " msg, __func__, (long)_e, ##args); \
- abort(); \
- } \
- } \
+#define dispatch_debug_assert(e, msg, args...) do { \
+ if (__builtin_constant_p(e)) { \
+ char __compile_time_assert__[(bool)(e) ? 1 : -1] DISPATCH_UNUSED; \
+ } else { \
+ typeof(e) _e = fastpath(e); /* always eval 'e' */ \
+ if (DISPATCH_DEBUG && !_e) { \
+ _dispatch_log("%s() 0x%lx: " msg, __func__, (long)_e, ##args); \
+ abort(); \
+ } \
+ } \
} while (0)
-#if __GNUC__
-#define DO_CAST(x) ((struct dispatch_object_s *)(x)._do)
+/* Make sure the debug statements don't get too stale */
+#define _dispatch_debug(x, args...) \
+({ \
+ if (DISPATCH_DEBUG) { \
+ _dispatch_log("libdispatch: %u\t%p\t" x, __LINE__, \
+ (void *)_dispatch_thread_self(), ##args); \
+ } \
+})
+
+#if DISPATCH_DEBUG
+#if HAVE_MACH
+DISPATCH_NOINLINE DISPATCH_USED
+void dispatch_debug_machport(mach_port_t name, const char* str);
+#endif
+DISPATCH_NOINLINE DISPATCH_USED
+void dispatch_debug_kevents(struct kevent* kev, size_t count, const char* str);
#else
-#define DO_CAST(x) ((struct dispatch_object_s *)(x))
+static inline void
+dispatch_debug_kevents(struct kevent* kev DISPATCH_UNUSED,
+ size_t count DISPATCH_UNUSED,
+ const char* str DISPATCH_UNUSED) {}
+#endif
+
+#if DISPATCH_USE_CLIENT_CALLOUT
+
+DISPATCH_NOTHROW void
+_dispatch_client_callout(void *ctxt, dispatch_function_t f);
+DISPATCH_NOTHROW void
+_dispatch_client_callout2(void *ctxt, size_t i, void (*f)(void *, size_t));
+
+#else
+
+DISPATCH_ALWAYS_INLINE
+static inline void
+_dispatch_client_callout(void *ctxt, dispatch_function_t f)
+{
+ return f(ctxt);
+}
+
+DISPATCH_ALWAYS_INLINE
+static inline void
+_dispatch_client_callout2(void *ctxt, size_t i, void (*f)(void *, size_t))
+{
+ return f(ctxt, i);
+}
+
#endif
#ifdef __BLOCKS__
+DISPATCH_ALWAYS_INLINE
+static inline void
+_dispatch_client_callout_block(dispatch_block_t b)
+{
+ struct Block_basic *bb = (void*)b;
+ return _dispatch_client_callout(b, (dispatch_function_t)bb->Block_invoke);
+}
+
dispatch_block_t _dispatch_Block_copy(dispatch_block_t block);
+#define _dispatch_Block_copy(x) ((typeof(x))_dispatch_Block_copy(x))
void _dispatch_call_block_and_release(void *block);
-void _dispatch_call_block_and_release2(void *block, void *ctxt);
#endif /* __BLOCKS__ */
void dummy_function(void);
long dummy_function_r0(void);
+void _dispatch_source_drain_kevent(struct kevent *);
-/* Make sure the debug statments don't get too stale */
-#define _dispatch_debug(x, args...) \
-({ \
- if (DISPATCH_DEBUG) { \
- _dispatch_log("libdispatch: %u\t%p\t" x, __LINE__, \
- (void *)_dispatch_thread_self(), ##args); \
- } \
-})
-
-
-uint64_t _dispatch_get_nanoseconds(void);
-
-#ifndef DISPATCH_NO_LEGACY
-dispatch_source_t
-_dispatch_source_create2(dispatch_source_t ds,
- dispatch_source_attr_t attr,
- void *context,
- dispatch_source_handler_function_t handler);
-#endif
-
+long _dispatch_update_kq(const struct kevent *);
void _dispatch_run_timers(void);
// Returns howsoon with updated time value, or NULL if no timers active.
struct timespec *_dispatch_get_next_timer_fire(struct timespec *howsoon);
-dispatch_semaphore_t _dispatch_get_thread_semaphore(void);
-void _dispatch_put_thread_semaphore(dispatch_semaphore_t);
-
bool _dispatch_source_testcancel(dispatch_source_t);
uint64_t _dispatch_timeout(dispatch_time_t when);
-#if USE_POSIX_SEM
-struct timespec _dispatch_timeout_ts(dispatch_time_t when);
-#endif
-__private_extern__ bool _dispatch_safe_fork;
+extern bool _dispatch_safe_fork;
-__private_extern__ struct _dispatch_hw_config_s {
+extern struct _dispatch_hw_config_s {
uint32_t cc_max_active;
uint32_t cc_max_logical;
uint32_t cc_max_physical;
} _dispatch_hw_config;
/* #includes dependent on internal.h */
+#include "shims.h"
#include "object_internal.h"
-#include "hw_shims.h"
-#include "os_shims.h"
#include "queue_internal.h"
#include "semaphore_internal.h"
#include "source_internal.h"
+#include "data_internal.h"
+#include "io_internal.h"
+#include "trace.h"
-#if USE_APPLE_CRASHREPORTER_INFO
+// SnowLeopard and iOS Simulator fallbacks
+
+#if HAVE_PTHREAD_WORKQUEUES
+#if !defined(WORKQ_BG_PRIOQUEUE) || \
+ (TARGET_IPHONE_SIMULATOR && __MAC_OS_X_VERSION_MIN_REQUIRED < 1070)
+#undef WORKQ_BG_PRIOQUEUE
+#define WORKQ_BG_PRIOQUEUE WORKQ_LOW_PRIOQUEUE
+#endif
+#endif // HAVE_PTHREAD_WORKQUEUES
+
+#if HAVE_MACH
+#if !defined(MACH_NOTIFY_SEND_POSSIBLE) || \
+ (TARGET_IPHONE_SIMULATOR && __MAC_OS_X_VERSION_MIN_REQUIRED < 1070)
+#undef MACH_NOTIFY_SEND_POSSIBLE
+#define MACH_NOTIFY_SEND_POSSIBLE MACH_NOTIFY_DEAD_NAME
+#endif
+#endif // HAVE_MACH
+
+#ifdef EVFILT_VM
+#if TARGET_IPHONE_SIMULATOR && __MAC_OS_X_VERSION_MIN_REQUIRED < 1070
+#undef DISPATCH_USE_MALLOC_VM_PRESSURE_SOURCE
+#define DISPATCH_USE_MALLOC_VM_PRESSURE_SOURCE 0
+#endif
+#ifndef DISPATCH_USE_VM_PRESSURE
+#define DISPATCH_USE_VM_PRESSURE 1
+#endif
+#ifndef DISPATCH_USE_MALLOC_VM_PRESSURE_SOURCE
+#define DISPATCH_USE_MALLOC_VM_PRESSURE_SOURCE 1
+#endif
+#endif // EVFILT_VM
+
+#if defined(F_SETNOSIGPIPE) && defined(F_GETNOSIGPIPE)
+#if TARGET_IPHONE_SIMULATOR && __MAC_OS_X_VERSION_MIN_REQUIRED < 1070
+#undef DISPATCH_USE_SETNOSIGPIPE
+#define DISPATCH_USE_SETNOSIGPIPE 0
+#endif
+#ifndef DISPATCH_USE_SETNOSIGPIPE
+#define DISPATCH_USE_SETNOSIGPIPE 1
+#endif
+#endif // F_SETNOSIGPIPE
+
+
+#define _dispatch_set_crash_log_message(x)
#if HAVE_MACH
// MIG_REPLY_MISMATCH means either:
-// 1) A signal handler is NOT using async-safe API. See the sigaction(2) man page for more info.
+// 1) A signal handler is NOT using async-safe API. See the sigaction(2) man
+// page for more info.
// 2) A hand crafted call to mach_msg*() screwed up. Use MIG.
-#define DISPATCH_VERIFY_MIG(x) do { \
- if ((x) == MIG_REPLY_MISMATCH) { \
- __crashreporter_info__ = "MIG_REPLY_MISMATCH"; \
- _dispatch_hardware_crash(); \
- } \
+#define DISPATCH_VERIFY_MIG(x) do { \
+ if ((x) == MIG_REPLY_MISMATCH) { \
+ _dispatch_set_crash_log_message("MIG_REPLY_MISMATCH"); \
+ _dispatch_hardware_crash(); \
+ } \
} while (0)
#endif
-#if defined(__x86_64__) || defined(__i386__)
-// total hack to ensure that return register of a function is not trashed
-#define DISPATCH_CRASH(x) do { \
- asm("mov %1, %0" : "=m" (__crashreporter_info__) : "c" ("BUG IN LIBDISPATCH: " x)); \
- _dispatch_hardware_crash(); \
+#define DISPATCH_CRASH(x) do { \
+ _dispatch_set_crash_log_message("BUG IN LIBDISPATCH: " x); \
+ _dispatch_hardware_crash(); \
} while (0)
-#define DISPATCH_CLIENT_CRASH(x) do { \
- asm("mov %1, %0" : "=m" (__crashreporter_info__) : "c" ("BUG IN CLIENT OF LIBDISPATCH: " x)); \
- _dispatch_hardware_crash(); \
+#define DISPATCH_CLIENT_CRASH(x) do { \
+ _dispatch_set_crash_log_message("BUG IN CLIENT OF LIBDISPATCH: " x); \
+ _dispatch_hardware_crash(); \
} while (0)
-#else /* !(defined(__x86_64__) || defined(__i386__)) */
-
-#define DISPATCH_CRASH(x) do { \
- __crashreporter_info__ = "BUG IN LIBDISPATCH: " x; \
- _dispatch_hardware_crash(); \
- } while (0)
-
-#define DISPATCH_CLIENT_CRASH(x) do { \
- __crashreporter_info__ = "BUG IN CLIENT OF LIBDISPATCH: " x; \
- _dispatch_hardware_crash(); \
- } while (0)
-#endif /* defined(__x86_64__) || defined(__i386__) */
-
-#else /* !USE_APPLE_CRASHREPORTER_INFO */
-
-#if HAVE_MACH
-#define DISPATCH_VERIFY_MIG(x) do { \
- if ((x) == MIG_REPLY_MISMATCH) { \
- _dispatch_hardware_crash(); \
- } \
- } while (0)
-#endif
-
-#define DISPATCH_CRASH(x) _dispatch_hardware_crash()
-#define DISPATCH_CLIENT_CRASH(x) _dispatch_hardware_crash()
-
-#endif /* USE_APPLE_CRASHREPORTER_INFO */
-
#endif /* __DISPATCH_INTERNAL__ */
diff --git a/src/io.c b/src/io.c
new file mode 100644
index 0000000..b306054
--- /dev/null
+++ b/src/io.c
@@ -0,0 +1,2155 @@
+/*
+ * Copyright (c) 2009-2011 Apple Inc. All rights reserved.
+ *
+ * @APPLE_APACHE_LICENSE_HEADER_START@
+ *
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ *
+ * @APPLE_APACHE_LICENSE_HEADER_END@
+ */
+
+#include "internal.h"
+
+typedef void (^dispatch_fd_entry_init_callback_t)(dispatch_fd_entry_t fd_entry);
+
+DISPATCH_EXPORT DISPATCH_NOTHROW
+void _dispatch_iocntl(uint32_t param, uint64_t value);
+
+static void _dispatch_io_dispose(dispatch_io_t channel);
+static dispatch_operation_t _dispatch_operation_create(
+ dispatch_op_direction_t direction, dispatch_io_t channel, off_t offset,
+ size_t length, dispatch_data_t data, dispatch_queue_t queue,
+ dispatch_io_handler_t handler);
+static void _dispatch_operation_dispose(dispatch_operation_t operation);
+static void _dispatch_operation_enqueue(dispatch_operation_t op,
+ dispatch_op_direction_t direction, dispatch_data_t data);
+static dispatch_source_t _dispatch_operation_timer(dispatch_queue_t tq,
+ dispatch_operation_t op);
+static inline void _dispatch_fd_entry_retain(dispatch_fd_entry_t fd_entry);
+static inline void _dispatch_fd_entry_release(dispatch_fd_entry_t fd_entry);
+static void _dispatch_fd_entry_init_async(dispatch_fd_t fd,
+ dispatch_fd_entry_init_callback_t completion_callback);
+static dispatch_fd_entry_t _dispatch_fd_entry_create_with_fd(dispatch_fd_t fd,
+ uintptr_t hash);
+static dispatch_fd_entry_t _dispatch_fd_entry_create_with_path(
+ dispatch_io_path_data_t path_data, dev_t dev, mode_t mode);
+static int _dispatch_fd_entry_open(dispatch_fd_entry_t fd_entry,
+ dispatch_io_t channel);
+static void _dispatch_fd_entry_cleanup_operations(dispatch_fd_entry_t fd_entry,
+ dispatch_io_t channel);
+static void _dispatch_stream_init(dispatch_fd_entry_t fd_entry,
+ dispatch_queue_t tq);
+static void _dispatch_stream_dispose(dispatch_fd_entry_t fd_entry,
+ dispatch_op_direction_t direction);
+static void _dispatch_disk_init(dispatch_fd_entry_t fd_entry, dev_t dev);
+static void _dispatch_disk_dispose(dispatch_disk_t disk);
+static void _dispatch_stream_enqueue_operation(dispatch_stream_t stream,
+ dispatch_operation_t operation, dispatch_data_t data);
+static void _dispatch_disk_enqueue_operation(dispatch_disk_t dsk,
+ dispatch_operation_t operation, dispatch_data_t data);
+static void _dispatch_stream_cleanup_operations(dispatch_stream_t stream,
+ dispatch_io_t channel);
+static void _dispatch_disk_cleanup_operations(dispatch_disk_t disk,
+ dispatch_io_t channel);
+static void _dispatch_stream_source_handler(void *ctx);
+static void _dispatch_stream_handler(void *ctx);
+static void _dispatch_disk_handler(void *ctx);
+static void _dispatch_disk_perform(void *ctxt);
+static void _dispatch_operation_advise(dispatch_operation_t op,
+ size_t chunk_size);
+static int _dispatch_operation_perform(dispatch_operation_t op);
+static void _dispatch_operation_deliver_data(dispatch_operation_t op,
+ dispatch_op_flags_t flags);
+
+// Macros to wrap syscalls which return -1 on error, and retry on EINTR
+#define _dispatch_io_syscall_switch_noerr(_err, _syscall, ...) do { \
+ switch (((_err) = (((_syscall) == -1) ? errno : 0))) { \
+ case EINTR: continue; \
+ __VA_ARGS__ \
+ } \
+ } while (0)
+#define _dispatch_io_syscall_switch(__err, __syscall, ...) do { \
+ _dispatch_io_syscall_switch_noerr(__err, __syscall, \
+ case 0: break; \
+ __VA_ARGS__ \
+ ); \
+ } while (0)
+#define _dispatch_io_syscall(__syscall) do { int __err; \
+ _dispatch_io_syscall_switch(__err, __syscall); \
+ } while (0)
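+
+// Example usage (illustrative sketch, mirroring dispatch_io_create() below):
+//
+//	int err;
+//	off_t f_ptr;
+//	_dispatch_io_syscall_switch_noerr(err,
+//		f_ptr = lseek(fd, 0, SEEK_CUR),
+//		case 0: break;
+//		default: (void)dispatch_assume_zero(err); break;
+//	);
+//
+// EINTR is handled by the macro itself; all other errno values are dispatched
+// to the caller-supplied cases.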
+
+enum {
+ DISPATCH_OP_COMPLETE = 1,
+ DISPATCH_OP_DELIVER,
+ DISPATCH_OP_DELIVER_AND_COMPLETE,
+ DISPATCH_OP_COMPLETE_RESUME,
+ DISPATCH_OP_RESUME,
+ DISPATCH_OP_ERR,
+ DISPATCH_OP_FD_ERR,
+};
+
+#pragma mark -
+#pragma mark dispatch_io_vtable
+
+static const struct dispatch_io_vtable_s _dispatch_io_vtable = {
+ .do_type = DISPATCH_IO_TYPE,
+ .do_kind = "channel",
+ .do_dispose = _dispatch_io_dispose,
+ .do_invoke = NULL,
+ .do_probe = (void *)dummy_function_r0,
+ .do_debug = (void *)dummy_function_r0,
+};
+
+static const struct dispatch_operation_vtable_s _dispatch_operation_vtable = {
+ .do_type = DISPATCH_OPERATION_TYPE,
+ .do_kind = "operation",
+ .do_dispose = _dispatch_operation_dispose,
+ .do_invoke = NULL,
+ .do_probe = (void *)dummy_function_r0,
+ .do_debug = (void *)dummy_function_r0,
+};
+
+static const struct dispatch_disk_vtable_s _dispatch_disk_vtable = {
+ .do_type = DISPATCH_DISK_TYPE,
+ .do_kind = "disk",
+ .do_dispose = _dispatch_disk_dispose,
+ .do_invoke = NULL,
+ .do_probe = (void *)dummy_function_r0,
+ .do_debug = (void *)dummy_function_r0,
+};
+
+#pragma mark -
+#pragma mark dispatch_io_hashtables
+
+#if TARGET_OS_EMBEDDED
+#define DIO_HASH_SIZE 64u // must be a power of two
+#else
+#define DIO_HASH_SIZE 256u // must be a power of two
+#endif
+#define DIO_HASH(x) ((uintptr_t)((x) & (DIO_HASH_SIZE - 1)))
+
+// Global hashtable of dev_t -> disk_s mappings
+DISPATCH_CACHELINE_ALIGN
+static TAILQ_HEAD(, dispatch_disk_s) _dispatch_io_devs[DIO_HASH_SIZE];
+// Global hashtable of fd -> fd_entry_s mappings
+DISPATCH_CACHELINE_ALIGN
+static TAILQ_HEAD(, dispatch_fd_entry_s) _dispatch_io_fds[DIO_HASH_SIZE];
+
+static dispatch_once_t _dispatch_io_devs_lockq_pred;
+static dispatch_queue_t _dispatch_io_devs_lockq;
+static dispatch_queue_t _dispatch_io_fds_lockq;
+
+static void
+_dispatch_io_fds_lockq_init(void *context DISPATCH_UNUSED)
+{
+ _dispatch_io_fds_lockq = dispatch_queue_create(
+ "com.apple.libdispatch-io.fd_lockq", NULL);
+ unsigned int i;
+ for (i = 0; i < DIO_HASH_SIZE; i++) {
+ TAILQ_INIT(&_dispatch_io_fds[i]);
+ }
+}
+
+static void
+_dispatch_io_devs_lockq_init(void *context DISPATCH_UNUSED)
+{
+ _dispatch_io_devs_lockq = dispatch_queue_create(
+ "com.apple.libdispatch-io.dev_lockq", NULL);
+ unsigned int i;
+ for (i = 0; i < DIO_HASH_SIZE; i++) {
+ TAILQ_INIT(&_dispatch_io_devs[i]);
+ }
+}
+
+#pragma mark -
+#pragma mark dispatch_io_defaults
+
+enum {
+ DISPATCH_IOCNTL_CHUNK_PAGES = 1,
+ DISPATCH_IOCNTL_LOW_WATER_CHUNKS,
+ DISPATCH_IOCNTL_INITIAL_DELIVERY,
+ DISPATCH_IOCNTL_MAX_PENDING_IO_REQS,
+};
+
+static struct dispatch_io_defaults_s {
+ size_t chunk_pages, low_water_chunks, max_pending_io_reqs;
+ bool initial_delivery;
+} dispatch_io_defaults = {
+ .chunk_pages = DIO_MAX_CHUNK_PAGES,
+ .low_water_chunks = DIO_DEFAULT_LOW_WATER_CHUNKS,
+ .max_pending_io_reqs = DIO_MAX_PENDING_IO_REQS,
+};
+
+#define _dispatch_iocntl_set_default(p, v) do { \
+ dispatch_io_defaults.p = (typeof(dispatch_io_defaults.p))(v); \
+ } while (0)
+
+void
+_dispatch_iocntl(uint32_t param, uint64_t value)
+{
+ switch (param) {
+ case DISPATCH_IOCNTL_CHUNK_PAGES:
+ _dispatch_iocntl_set_default(chunk_pages, value);
+ break;
+ case DISPATCH_IOCNTL_LOW_WATER_CHUNKS:
+ _dispatch_iocntl_set_default(low_water_chunks, value);
+ break;
+ case DISPATCH_IOCNTL_INITIAL_DELIVERY:
+		_dispatch_iocntl_set_default(initial_delivery, value);
+		break;
+ case DISPATCH_IOCNTL_MAX_PENDING_IO_REQS:
+ _dispatch_iocntl_set_default(max_pending_io_reqs, value);
+ break;
+ }
+}
+
+#pragma mark -
+#pragma mark dispatch_io_t
+
+static dispatch_io_t
+_dispatch_io_create(dispatch_io_type_t type)
+{
+ dispatch_io_t channel = calloc(1ul, sizeof(struct dispatch_io_s));
+ channel->do_vtable = &_dispatch_io_vtable;
+ channel->do_next = DISPATCH_OBJECT_LISTLESS;
+ channel->do_ref_cnt = 1;
+ channel->do_xref_cnt = 1;
+ channel->do_targetq = _dispatch_get_root_queue(0, true);
+ channel->params.type = type;
+ channel->params.high = SIZE_MAX;
+ channel->params.low = dispatch_io_defaults.low_water_chunks *
+ dispatch_io_defaults.chunk_pages * PAGE_SIZE;
+ channel->queue = dispatch_queue_create("com.apple.libdispatch-io.channelq",
+ NULL);
+ return channel;
+}
+
+static void
+_dispatch_io_init(dispatch_io_t channel, dispatch_fd_entry_t fd_entry,
+ dispatch_queue_t queue, int err, void (^cleanup_handler)(int))
+{
+ // Enqueue the cleanup handler on the suspended close queue
+ if (cleanup_handler) {
+ _dispatch_retain(queue);
+ dispatch_async(!err ? fd_entry->close_queue : channel->queue, ^{
+ dispatch_async(queue, ^{
+ _dispatch_io_debug("cleanup handler invoke", -1);
+ cleanup_handler(err);
+ });
+ _dispatch_release(queue);
+ });
+ }
+ if (fd_entry) {
+ channel->fd_entry = fd_entry;
+ dispatch_retain(fd_entry->barrier_queue);
+ dispatch_retain(fd_entry->barrier_group);
+ channel->barrier_queue = fd_entry->barrier_queue;
+ channel->barrier_group = fd_entry->barrier_group;
+ } else {
+ // Still need to create a barrier queue, since all operations go
+ // through it
+ channel->barrier_queue = dispatch_queue_create(
+ "com.apple.libdispatch-io.barrierq", NULL);
+ channel->barrier_group = dispatch_group_create();
+ }
+}
+
+static void
+_dispatch_io_dispose(dispatch_io_t channel)
+{
+ if (channel->fd_entry) {
+ if (channel->fd_entry->path_data) {
+ // This modification is safe since path_data->channel is checked
+ // only on close_queue (which is still suspended at this point)
+ channel->fd_entry->path_data->channel = NULL;
+ }
+ // Cleanup handlers will only run when all channels related to this
+ // fd are complete
+ _dispatch_fd_entry_release(channel->fd_entry);
+ }
+ if (channel->queue) {
+ dispatch_release(channel->queue);
+ }
+ if (channel->barrier_queue) {
+ dispatch_release(channel->barrier_queue);
+ }
+ if (channel->barrier_group) {
+ dispatch_release(channel->barrier_group);
+ }
+ _dispatch_dispose(channel);
+}
+
+static int
+_dispatch_io_validate_type(dispatch_io_t channel, mode_t mode)
+{
+ int err = 0;
+ if (S_ISDIR(mode)) {
+ err = EISDIR;
+ } else if (channel->params.type == DISPATCH_IO_RANDOM &&
+ (S_ISFIFO(mode) || S_ISSOCK(mode))) {
+ err = ESPIPE;
+ }
+ return err;
+}
+
+static int
+_dispatch_io_get_error(dispatch_operation_t op, dispatch_io_t channel,
+ bool ignore_closed)
+{
+ // On _any_ queue
+ int err;
+ if (op) {
+ channel = op->channel;
+ }
+ if (channel->atomic_flags & (DIO_CLOSED|DIO_STOPPED)) {
+ if (!ignore_closed || channel->atomic_flags & DIO_STOPPED) {
+ err = ECANCELED;
+ } else {
+ err = 0;
+ }
+ } else {
+ err = op ? op->fd_entry->err : channel->err;
+ }
+ return err;
+}
+
+#pragma mark -
+#pragma mark dispatch_io_channels
+
+dispatch_io_t
+dispatch_io_create(dispatch_io_type_t type, dispatch_fd_t fd,
+ dispatch_queue_t queue, void (^cleanup_handler)(int))
+{
+ if (type != DISPATCH_IO_STREAM && type != DISPATCH_IO_RANDOM) {
+ return NULL;
+ }
+ _dispatch_io_debug("io create", fd);
+ dispatch_io_t channel = _dispatch_io_create(type);
+ channel->fd = fd;
+ channel->fd_actual = fd;
+ dispatch_suspend(channel->queue);
+ _dispatch_retain(queue);
+ _dispatch_retain(channel);
+ _dispatch_fd_entry_init_async(fd, ^(dispatch_fd_entry_t fd_entry) {
+ // On barrier queue
+ int err = fd_entry->err;
+ if (!err) {
+ err = _dispatch_io_validate_type(channel, fd_entry->stat.mode);
+ }
+ if (!err && type == DISPATCH_IO_RANDOM) {
+ off_t f_ptr;
+ _dispatch_io_syscall_switch_noerr(err,
+ f_ptr = lseek(fd_entry->fd, 0, SEEK_CUR),
+ case 0: channel->f_ptr = f_ptr; break;
+ default: (void)dispatch_assume_zero(err); break;
+ );
+ }
+ channel->err = err;
+ _dispatch_fd_entry_retain(fd_entry);
+ _dispatch_io_init(channel, fd_entry, queue, err, cleanup_handler);
+ dispatch_resume(channel->queue);
+ _dispatch_release(channel);
+ _dispatch_release(queue);
+ });
+ return channel;
+}
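+
+// Example usage (illustrative sketch; `fd` and the cleanup handler body are
+// hypothetical): create a stream channel over an existing descriptor and
+// close the descriptor once the channel relinquishes it:
+//
+//	dispatch_io_t ch = dispatch_io_create(DISPATCH_IO_STREAM, fd,
+//			dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0),
+//			^(int error) { close(fd); });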
+
+dispatch_io_t
+dispatch_io_create_with_path(dispatch_io_type_t type, const char *path,
+ int oflag, mode_t mode, dispatch_queue_t queue,
+ void (^cleanup_handler)(int error))
+{
+ if ((type != DISPATCH_IO_STREAM && type != DISPATCH_IO_RANDOM) ||
+ !(path && *path == '/')) {
+ return NULL;
+ }
+ size_t pathlen = strlen(path);
+ dispatch_io_path_data_t path_data = malloc(sizeof(*path_data) + pathlen+1);
+ if (!path_data) {
+ return NULL;
+ }
+ _dispatch_io_debug("io create with path %s", -1, path);
+ dispatch_io_t channel = _dispatch_io_create(type);
+ channel->fd = -1;
+ channel->fd_actual = -1;
+ path_data->channel = channel;
+ path_data->oflag = oflag;
+ path_data->mode = mode;
+ path_data->pathlen = pathlen;
+ memcpy(path_data->path, path, pathlen + 1);
+ _dispatch_retain(queue);
+ _dispatch_retain(channel);
+ dispatch_async(channel->queue, ^{
+ int err = 0;
+ struct stat st;
+ _dispatch_io_syscall_switch_noerr(err,
+ (path_data->oflag & O_NOFOLLOW) == O_NOFOLLOW ||
+ (path_data->oflag & O_SYMLINK) == O_SYMLINK ?
+ lstat(path_data->path, &st) : stat(path_data->path, &st),
+ case 0:
+ err = _dispatch_io_validate_type(channel, st.st_mode);
+ break;
+ default:
+ if ((path_data->oflag & O_CREAT) &&
+ (*(path_data->path + path_data->pathlen - 1) != '/')) {
+ // Check parent directory
+ char *c = strrchr(path_data->path, '/');
+ dispatch_assert(c);
+ *c = 0;
+ int perr;
+ _dispatch_io_syscall_switch_noerr(perr,
+ stat(path_data->path, &st),
+ case 0:
+ // Since the parent directory exists, open() will
+ // create a regular file after the fd_entry has
+ // been filled in
+ st.st_mode = S_IFREG;
+ err = 0;
+ break;
+ );
+ *c = '/';
+ }
+ break;
+ );
+ channel->err = err;
+ if (err) {
+ free(path_data);
+ _dispatch_io_init(channel, NULL, queue, err, cleanup_handler);
+ _dispatch_release(channel);
+ _dispatch_release(queue);
+ return;
+ }
+ dispatch_suspend(channel->queue);
+ dispatch_once_f(&_dispatch_io_devs_lockq_pred, NULL,
+ _dispatch_io_devs_lockq_init);
+ dispatch_async(_dispatch_io_devs_lockq, ^{
+ dispatch_fd_entry_t fd_entry = _dispatch_fd_entry_create_with_path(
+ path_data, st.st_dev, st.st_mode);
+ _dispatch_io_init(channel, fd_entry, queue, 0, cleanup_handler);
+ dispatch_resume(channel->queue);
+ _dispatch_release(channel);
+ _dispatch_release(queue);
+ });
+ });
+ return channel;
+}
+
+dispatch_io_t
+dispatch_io_create_with_io(dispatch_io_type_t type, dispatch_io_t in_channel,
+ dispatch_queue_t queue, void (^cleanup_handler)(int error))
+{
+ if (type != DISPATCH_IO_STREAM && type != DISPATCH_IO_RANDOM) {
+ return NULL;
+ }
+ _dispatch_io_debug("io create with io %p", -1, in_channel);
+ dispatch_io_t channel = _dispatch_io_create(type);
+ dispatch_suspend(channel->queue);
+ _dispatch_retain(queue);
+ _dispatch_retain(channel);
+ _dispatch_retain(in_channel);
+ dispatch_async(in_channel->queue, ^{
+ int err0 = _dispatch_io_get_error(NULL, in_channel, false);
+ if (err0) {
+ channel->err = err0;
+ _dispatch_io_init(channel, NULL, queue, err0, cleanup_handler);
+ dispatch_resume(channel->queue);
+ _dispatch_release(channel);
+ _dispatch_release(in_channel);
+ _dispatch_release(queue);
+ return;
+ }
+ dispatch_async(in_channel->barrier_queue, ^{
+ int err = _dispatch_io_get_error(NULL, in_channel, false);
+ // If there is no error, the fd_entry for the in_channel is valid.
+ // Since we are running on in_channel's queue, the fd_entry has been
+ // fully resolved and will stay valid for the duration of this block
+ if (!err) {
+ err = in_channel->err;
+ if (!err) {
+ err = in_channel->fd_entry->err;
+ }
+ }
+ if (!err) {
+ err = _dispatch_io_validate_type(channel,
+ in_channel->fd_entry->stat.mode);
+ }
+ if (!err && type == DISPATCH_IO_RANDOM && in_channel->fd != -1) {
+ off_t f_ptr;
+ _dispatch_io_syscall_switch_noerr(err,
+ f_ptr = lseek(in_channel->fd_entry->fd, 0, SEEK_CUR),
+ case 0: channel->f_ptr = f_ptr; break;
+ default: (void)dispatch_assume_zero(err); break;
+ );
+ }
+ channel->err = err;
+ if (err) {
+ _dispatch_io_init(channel, NULL, queue, err, cleanup_handler);
+ dispatch_resume(channel->queue);
+ _dispatch_release(channel);
+ _dispatch_release(in_channel);
+ _dispatch_release(queue);
+ return;
+ }
+ if (in_channel->fd == -1) {
+ // in_channel was created from path
+ channel->fd = -1;
+ channel->fd_actual = -1;
+ mode_t mode = in_channel->fd_entry->stat.mode;
+ dev_t dev = in_channel->fd_entry->stat.dev;
+ size_t path_data_len = sizeof(struct dispatch_io_path_data_s) +
+ in_channel->fd_entry->path_data->pathlen + 1;
+ dispatch_io_path_data_t path_data = malloc(path_data_len);
+ memcpy(path_data, in_channel->fd_entry->path_data,
+ path_data_len);
+ path_data->channel = channel;
+			// _dispatch_io_devs_lockq is known to already exist
+ dispatch_async(_dispatch_io_devs_lockq, ^{
+ dispatch_fd_entry_t fd_entry;
+ fd_entry = _dispatch_fd_entry_create_with_path(path_data,
+ dev, mode);
+ _dispatch_io_init(channel, fd_entry, queue, 0,
+ cleanup_handler);
+ dispatch_resume(channel->queue);
+ _dispatch_release(channel);
+ _dispatch_release(queue);
+ });
+ } else {
+ dispatch_fd_entry_t fd_entry = in_channel->fd_entry;
+ channel->fd = in_channel->fd;
+ channel->fd_actual = in_channel->fd_actual;
+ _dispatch_fd_entry_retain(fd_entry);
+ _dispatch_io_init(channel, fd_entry, queue, 0, cleanup_handler);
+ dispatch_resume(channel->queue);
+ _dispatch_release(channel);
+ _dispatch_release(queue);
+ }
+ _dispatch_release(in_channel);
+ });
+ });
+ return channel;
+}
+
+#pragma mark -
+#pragma mark dispatch_io_accessors
+
+void
+dispatch_io_set_high_water(dispatch_io_t channel, size_t high_water)
+{
+ _dispatch_retain(channel);
+ dispatch_async(channel->queue, ^{
+ _dispatch_io_debug("io set high water", channel->fd);
+ if (channel->params.low > high_water) {
+ channel->params.low = high_water;
+ }
+ channel->params.high = high_water ? high_water : 1;
+ _dispatch_release(channel);
+ });
+}
+
+void
+dispatch_io_set_low_water(dispatch_io_t channel, size_t low_water)
+{
+ _dispatch_retain(channel);
+ dispatch_async(channel->queue, ^{
+ _dispatch_io_debug("io set low water", channel->fd);
+ if (channel->params.high < low_water) {
+ channel->params.high = low_water ? low_water : 1;
+ }
+ channel->params.low = low_water;
+ _dispatch_release(channel);
+ });
+}
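+
+// Example usage (illustrative sketch): ask for deliveries of at least 4KB
+// and at most 1MB of data at a time:
+//
+//	dispatch_io_set_low_water(ch, 4096);
+//	dispatch_io_set_high_water(ch, 1024 * 1024);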
+
+void
+dispatch_io_set_interval(dispatch_io_t channel, uint64_t interval,
+ unsigned long flags)
+{
+ _dispatch_retain(channel);
+ dispatch_async(channel->queue, ^{
+ _dispatch_io_debug("io set interval", channel->fd);
+ channel->params.interval = interval;
+ channel->params.interval_flags = flags;
+ _dispatch_release(channel);
+ });
+}
+
+void
+_dispatch_io_set_target_queue(dispatch_io_t channel, dispatch_queue_t dq)
+{
+ _dispatch_retain(dq);
+ _dispatch_retain(channel);
+ dispatch_async(channel->queue, ^{
+ dispatch_queue_t prev_dq = channel->do_targetq;
+ channel->do_targetq = dq;
+ _dispatch_release(prev_dq);
+ _dispatch_release(channel);
+ });
+}
+
+dispatch_fd_t
+dispatch_io_get_descriptor(dispatch_io_t channel)
+{
+ if (channel->atomic_flags & (DIO_CLOSED|DIO_STOPPED)) {
+ return -1;
+ }
+ dispatch_fd_t fd = channel->fd_actual;
+ if (fd == -1 &&
+ _dispatch_thread_getspecific(dispatch_io_key) == channel) {
+ dispatch_fd_entry_t fd_entry = channel->fd_entry;
+ (void)_dispatch_fd_entry_open(fd_entry, channel);
+ }
+ return channel->fd_actual;
+}
+
+#pragma mark -
+#pragma mark dispatch_io_operations
+
+static void
+_dispatch_io_stop(dispatch_io_t channel)
+{
+ _dispatch_io_debug("io stop", channel->fd);
+ (void)dispatch_atomic_or2o(channel, atomic_flags, DIO_STOPPED);
+ _dispatch_retain(channel);
+ dispatch_async(channel->queue, ^{
+ dispatch_async(channel->barrier_queue, ^{
+ dispatch_fd_entry_t fd_entry = channel->fd_entry;
+ if (fd_entry) {
+ _dispatch_io_debug("io stop cleanup", channel->fd);
+ _dispatch_fd_entry_cleanup_operations(fd_entry, channel);
+ channel->fd_entry = NULL;
+ _dispatch_fd_entry_release(fd_entry);
+ } else if (channel->fd != -1) {
+ // Stop after close, need to check if fd_entry still exists
+ _dispatch_retain(channel);
+ dispatch_async(_dispatch_io_fds_lockq, ^{
+ _dispatch_io_debug("io stop after close cleanup",
+ channel->fd);
+ dispatch_fd_entry_t fdi;
+ uintptr_t hash = DIO_HASH(channel->fd);
+ TAILQ_FOREACH(fdi, &_dispatch_io_fds[hash], fd_list) {
+ if (fdi->fd == channel->fd) {
+ _dispatch_fd_entry_cleanup_operations(fdi, channel);
+ break;
+ }
+ }
+ _dispatch_release(channel);
+ });
+ }
+ _dispatch_release(channel);
+ });
+ });
+}
+
+void
+dispatch_io_close(dispatch_io_t channel, unsigned long flags)
+{
+ if (flags & DISPATCH_IO_STOP) {
+ // Don't stop an already stopped channel
+ if (channel->atomic_flags & DIO_STOPPED) {
+ return;
+ }
+ return _dispatch_io_stop(channel);
+ }
+ // Don't close an already closed or stopped channel
+ if (channel->atomic_flags & (DIO_CLOSED|DIO_STOPPED)) {
+ return;
+ }
+ _dispatch_retain(channel);
+ dispatch_async(channel->queue, ^{
+ dispatch_async(channel->barrier_queue, ^{
+ _dispatch_io_debug("io close", channel->fd);
+ (void)dispatch_atomic_or2o(channel, atomic_flags, DIO_CLOSED);
+ dispatch_fd_entry_t fd_entry = channel->fd_entry;
+ if (fd_entry) {
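+ // Path-based channels keep their fd_entry pointer until the close
+ // queue clears it; fd-based channels drop it here and rely on the
+ // global fd list for stop-after-close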
+ if (!fd_entry->path_data) {
+ channel->fd_entry = NULL;
+ }
+ _dispatch_fd_entry_release(fd_entry);
+ }
+ _dispatch_release(channel);
+ });
+ });
+}
+
+void
+dispatch_io_barrier(dispatch_io_t channel, dispatch_block_t barrier)
+{
+ _dispatch_retain(channel);
+ dispatch_async(channel->queue, ^{
+ dispatch_queue_t io_q = channel->do_targetq;
+ dispatch_queue_t barrier_queue = channel->barrier_queue;
+ dispatch_group_t barrier_group = channel->barrier_group;
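+ // Suspend the barrier queue to hold back operations enqueued after the
+ // barrier; the barrier itself runs on the I/O target queue once the
+ // barrier group (all in-flight operations) has drained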
+ dispatch_async(barrier_queue, ^{
+ dispatch_suspend(barrier_queue);
+ dispatch_group_notify(barrier_group, io_q, ^{
+ _dispatch_thread_setspecific(dispatch_io_key, channel);
+ barrier();
+ _dispatch_thread_setspecific(dispatch_io_key, NULL);
+ dispatch_resume(barrier_queue);
+ _dispatch_release(channel);
+ });
+ });
+ });
+}
+
+void
+dispatch_io_read(dispatch_io_t channel, off_t offset, size_t length,
+ dispatch_queue_t queue, dispatch_io_handler_t handler)
+{
+ _dispatch_retain(channel);
+ _dispatch_retain(queue);
+ dispatch_async(channel->queue, ^{
+ dispatch_operation_t op;
+ op = _dispatch_operation_create(DOP_DIR_READ, channel, offset,
+ length, dispatch_data_empty, queue, handler);
+ if (op) {
+ dispatch_queue_t barrier_q = channel->barrier_queue;
+ dispatch_async(barrier_q, ^{
+ _dispatch_operation_enqueue(op, DOP_DIR_READ,
+ dispatch_data_empty);
+ });
+ }
+ _dispatch_release(channel);
+ _dispatch_release(queue);
+ });
+}
+
+void
+dispatch_io_write(dispatch_io_t channel, off_t offset, dispatch_data_t data,
+ dispatch_queue_t queue, dispatch_io_handler_t handler)
+{
+ _dispatch_io_data_retain(data);
+ _dispatch_retain(channel);
+ _dispatch_retain(queue);
+ dispatch_async(channel->queue, ^{
+ dispatch_operation_t op;
+ op = _dispatch_operation_create(DOP_DIR_WRITE, channel, offset,
+ dispatch_data_get_size(data), data, queue, handler);
+ if (op) {
+ dispatch_queue_t barrier_q = channel->barrier_queue;
+ dispatch_async(barrier_q, ^{
+ _dispatch_operation_enqueue(op, DOP_DIR_WRITE, data);
+ _dispatch_io_data_release(data);
+ });
+ } else {
+ _dispatch_io_data_release(data);
+ }
+ _dispatch_release(channel);
+ _dispatch_release(queue);
+ });
+}
+
+void
+dispatch_read(dispatch_fd_t fd, size_t length, dispatch_queue_t queue,
+ void (^handler)(dispatch_data_t, int))
+{
+ _dispatch_retain(queue);
+ _dispatch_fd_entry_init_async(fd, ^(dispatch_fd_entry_t fd_entry) {
+ // On barrier queue
+ if (fd_entry->err) {
+ int err = fd_entry->err;
+ dispatch_async(queue, ^{
+ _dispatch_io_debug("convenience handler invoke", fd);
+ handler(dispatch_data_empty, err);
+ });
+ _dispatch_release(queue);
+ return;
+ }
+ // Safe to access fd_entry on barrier queue
+ dispatch_io_t channel = fd_entry->convenience_channel;
+ if (!channel) {
+ channel = _dispatch_io_create(DISPATCH_IO_STREAM);
+ channel->fd = fd;
+ channel->fd_actual = fd;
+ channel->fd_entry = fd_entry;
+ dispatch_retain(fd_entry->barrier_queue);
+ dispatch_retain(fd_entry->barrier_group);
+ channel->barrier_queue = fd_entry->barrier_queue;
+ channel->barrier_group = fd_entry->barrier_group;
+ fd_entry->convenience_channel = channel;
+ }
+ __block dispatch_data_t deliver_data = dispatch_data_empty;
+ __block int err = 0;
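+ // The handler is invoked from the fd_entry's close queue, which only
+ // runs once all operations on this fd have completed and released it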
+ dispatch_async(fd_entry->close_queue, ^{
+ dispatch_async(queue, ^{
+ _dispatch_io_debug("convenience handler invoke", fd);
+ handler(deliver_data, err);
+ _dispatch_io_data_release(deliver_data);
+ });
+ _dispatch_release(queue);
+ });
+ dispatch_operation_t op =
+ _dispatch_operation_create(DOP_DIR_READ, channel, 0,
+ length, dispatch_data_empty,
+ _dispatch_get_root_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT,
+ false), ^(bool done, dispatch_data_t data, int error) {
+ if (data) {
+ data = dispatch_data_create_concat(deliver_data, data);
+ _dispatch_io_data_release(deliver_data);
+ deliver_data = data;
+ }
+ if (done) {
+ err = error;
+ }
+ });
+ if (op) {
+ _dispatch_operation_enqueue(op, DOP_DIR_READ, dispatch_data_empty);
+ }
+ });
+}
+
+void
+dispatch_write(dispatch_fd_t fd, dispatch_data_t data, dispatch_queue_t queue,
+ void (^handler)(dispatch_data_t, int))
+{
+ _dispatch_io_data_retain(data);
+ _dispatch_retain(queue);
+ _dispatch_fd_entry_init_async(fd, ^(dispatch_fd_entry_t fd_entry) {
+ // On barrier queue
+ if (fd_entry->err) {
+ int err = fd_entry->err;
+ dispatch_async(queue, ^{
+ _dispatch_io_debug("convenience handler invoke", fd);
+ handler(NULL, err);
+ });
+ _dispatch_release(queue);
+ return;
+ }
+ // Safe to access fd_entry on barrier queue
+ dispatch_io_t channel = fd_entry->convenience_channel;
+ if (!channel) {
+ channel = _dispatch_io_create(DISPATCH_IO_STREAM);
+ channel->fd = fd;
+ channel->fd_actual = fd;
+ channel->fd_entry = fd_entry;
+ dispatch_retain(fd_entry->barrier_queue);
+ dispatch_retain(fd_entry->barrier_group);
+ channel->barrier_queue = fd_entry->barrier_queue;
+ channel->barrier_group = fd_entry->barrier_group;
+ fd_entry->convenience_channel = channel;
+ }
+ __block dispatch_data_t deliver_data = NULL;
+ __block int err = 0;
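+ // As for dispatch_read, deliver the result once the close queue runs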
+ dispatch_async(fd_entry->close_queue, ^{
+ dispatch_async(queue, ^{
+ _dispatch_io_debug("convenience handler invoke", fd);
+ handler(deliver_data, err);
+ if (deliver_data) {
+ _dispatch_io_data_release(deliver_data);
+ }
+ });
+ _dispatch_release(queue);
+ });
+ dispatch_operation_t op =
+ _dispatch_operation_create(DOP_DIR_WRITE, channel, 0,
+ dispatch_data_get_size(data), data,
+ _dispatch_get_root_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT,
+ false), ^(bool done, dispatch_data_t d, int error) {
+ if (done) {
+ if (d) {
+ _dispatch_io_data_retain(d);
+ deliver_data = d;
+ }
+ err = error;
+ }
+ });
+ if (op) {
+ _dispatch_operation_enqueue(op, DOP_DIR_WRITE, data);
+ }
+ _dispatch_io_data_release(data);
+ });
+}
+
+#pragma mark -
+#pragma mark dispatch_operation_t
+
+static dispatch_operation_t
+_dispatch_operation_create(dispatch_op_direction_t direction,
+ dispatch_io_t channel, off_t offset, size_t length,
+ dispatch_data_t data, dispatch_queue_t queue,
+ dispatch_io_handler_t handler)
+{
+ // On channel queue
+ dispatch_assert(direction < DOP_DIR_MAX);
+ _dispatch_io_debug("operation create", channel->fd);
+#if DISPATCH_IO_DEBUG
+ int fd = channel->fd;
+#endif
+ // Safe to call _dispatch_io_get_error() with channel->fd_entry since
+ // that can only be NULL if atomic_flags are set rdar://problem/8362514
+ int err = _dispatch_io_get_error(NULL, channel, false);
+ if (err || !length) {
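+ // Deliver the error (or the empty-length result) directly to the
+ // handler; no operation object is created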
+ _dispatch_io_data_retain(data);
+ _dispatch_retain(queue);
+ dispatch_async(channel->barrier_queue, ^{
+ dispatch_async(queue, ^{
+ dispatch_data_t d = data;
+ if (direction == DOP_DIR_READ && err) {
+ d = NULL;
+ } else if (direction == DOP_DIR_WRITE && !err) {
+ d = NULL;
+ }
+ _dispatch_io_debug("IO handler invoke", fd);
+ handler(true, d, err);
+ _dispatch_io_data_release(data);
+ });
+ _dispatch_release(queue);
+ });
+ return NULL;
+ }
+ dispatch_operation_t op;
+ op = calloc(1ul, sizeof(struct dispatch_operation_s));
+ op->do_vtable = &_dispatch_operation_vtable;
+ op->do_next = DISPATCH_OBJECT_LISTLESS;
+ op->do_ref_cnt = 1;
+ op->do_xref_cnt = 0; // operation object is not exposed externally
+ op->op_q = dispatch_queue_create("com.apple.libdispatch-io.opq", NULL);
+ op->op_q->do_targetq = queue;
+ _dispatch_retain(queue);
+ op->active = false;
+ op->direction = direction;
+ op->offset = offset + channel->f_ptr;
+ op->length = length;
+ op->handler = Block_copy(handler);
+ _dispatch_retain(channel);
+ op->channel = channel;
+ op->params = channel->params;
+ // Take a snapshot of the priority of the channel queue. The actual I/O
+ // for this operation will be performed at this priority
+ dispatch_queue_t targetq = op->channel->do_targetq;
+ while (fastpath(targetq->do_targetq)) {
+ targetq = targetq->do_targetq;
+ }
+ op->do_targetq = targetq;
+ return op;
+}
+
+static void
+_dispatch_operation_dispose(dispatch_operation_t op)
+{
+ // Deliver the data if there's any
+ if (op->fd_entry) {
+ _dispatch_operation_deliver_data(op, DOP_DONE);
+ dispatch_group_leave(op->fd_entry->barrier_group);
+ _dispatch_fd_entry_release(op->fd_entry);
+ }
+ if (op->channel) {
+ _dispatch_release(op->channel);
+ }
+ if (op->timer) {
+ dispatch_release(op->timer);
+ }
+ // For write operations, op->buf is owned by op->buf_data
+ if (op->buf && op->direction == DOP_DIR_READ) {
+ free(op->buf);
+ }
+ if (op->buf_data) {
+ _dispatch_io_data_release(op->buf_data);
+ }
+ if (op->data) {
+ _dispatch_io_data_release(op->data);
+ }
+ if (op->op_q) {
+ dispatch_release(op->op_q);
+ }
+ Block_release(op->handler);
+ _dispatch_dispose(op);
+}
+
+static void
+_dispatch_operation_enqueue(dispatch_operation_t op,
+ dispatch_op_direction_t direction, dispatch_data_t data)
+{
+ // Called from the barrier queue
+ _dispatch_io_data_retain(data);
+ // If channel is closed or stopped, then call the handler immediately
+ int err = _dispatch_io_get_error(NULL, op->channel, false);
+ if (err) {
+ dispatch_io_handler_t handler = op->handler;
+ dispatch_async(op->op_q, ^{
+ dispatch_data_t d = data;
+ if (direction == DOP_DIR_READ && err) {
+ d = NULL;
+ } else if (direction == DOP_DIR_WRITE && !err) {
+ d = NULL;
+ }
+ handler(true, d, err);
+ _dispatch_io_data_release(data);
+ });
+ _dispatch_release(op);
+ return;
+ }
+ // Finish operation init
+ op->fd_entry = op->channel->fd_entry;
+ _dispatch_fd_entry_retain(op->fd_entry);
+ dispatch_group_enter(op->fd_entry->barrier_group);
+ dispatch_disk_t disk = op->fd_entry->disk;
+ if (!disk) {
+ dispatch_stream_t stream = op->fd_entry->streams[direction];
+ dispatch_async(stream->dq, ^{
+ _dispatch_stream_enqueue_operation(stream, op, data);
+ _dispatch_io_data_release(data);
+ });
+ } else {
+ dispatch_async(disk->pick_queue, ^{
+ _dispatch_disk_enqueue_operation(disk, op, data);
+ _dispatch_io_data_release(data);
+ });
+ }
+}
+
+static bool
+_dispatch_operation_should_enqueue(dispatch_operation_t op,
+ dispatch_queue_t tq, dispatch_data_t data)
+{
+ // On stream queue or disk queue
+ _dispatch_io_debug("enqueue operation", op->fd_entry->fd);
+ _dispatch_io_data_retain(data);
+ op->data = data;
+ int err = _dispatch_io_get_error(op, NULL, true);
+ if (err) {
+ op->err = err;
+ // Final release
+ _dispatch_release(op);
+ return false;
+ }
+ if (op->params.interval) {
+ dispatch_resume(_dispatch_operation_timer(tq, op));
+ }
+ return true;
+}
+
+static dispatch_source_t
+_dispatch_operation_timer(dispatch_queue_t tq, dispatch_operation_t op)
+{
+ // On stream queue or pick queue
+ if (op->timer) {
+ return op->timer;
+ }
+ dispatch_source_t timer = dispatch_source_create(
+ DISPATCH_SOURCE_TYPE_TIMER, 0, 0, tq);
+ dispatch_source_set_timer(timer, dispatch_time(DISPATCH_TIME_NOW,
+ op->params.interval), op->params.interval, 0);
+ dispatch_source_set_event_handler(timer, ^{
+ // On stream queue or pick queue
+ if (dispatch_source_testcancel(timer)) {
+ // Do nothing. The operation has already completed
+ return;
+ }
+ dispatch_op_flags_t flags = DOP_DEFAULT;
+ if (op->params.interval_flags & DISPATCH_IO_STRICT_INTERVAL) {
+ // Deliver even if there is less data than the low-water mark
+ flags |= DOP_DELIVER;
+ }
+ // If the operation is active, don't deliver data
+ if ((op->active) && (flags & DOP_DELIVER)) {
+ op->flags = flags;
+ } else {
+ _dispatch_operation_deliver_data(op, flags);
+ }
+ });
+ op->timer = timer;
+ return op->timer;
+}
+
+#pragma mark -
+#pragma mark dispatch_fd_entry_t
+
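+// A reference to an fd_entry is represented by a suspension of its close
+// queue: retain suspends, release resumes, so the cleanup blocks enqueued
+// on the close queue only run once every channel and operation has let go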
+static inline void
+_dispatch_fd_entry_retain(dispatch_fd_entry_t fd_entry) {
+ dispatch_suspend(fd_entry->close_queue);
+}
+
+static inline void
+_dispatch_fd_entry_release(dispatch_fd_entry_t fd_entry) {
+ dispatch_resume(fd_entry->close_queue);
+}
+
+static void
+_dispatch_fd_entry_init_async(dispatch_fd_t fd,
+ dispatch_fd_entry_init_callback_t completion_callback)
+{
+ static dispatch_once_t _dispatch_io_fds_lockq_pred;
+ dispatch_once_f(&_dispatch_io_fds_lockq_pred, NULL,
+ _dispatch_io_fds_lockq_init);
+ dispatch_async(_dispatch_io_fds_lockq, ^{
+ _dispatch_io_debug("fd entry init", fd);
+ dispatch_fd_entry_t fd_entry = NULL;
+ // Check to see if there is an existing entry for the given fd
+ uintptr_t hash = DIO_HASH(fd);
+ TAILQ_FOREACH(fd_entry, &_dispatch_io_fds[hash], fd_list) {
+ if (fd_entry->fd == fd) {
+ // Retain the fd_entry to ensure it cannot go away until the
+ // stat() has completed
+ _dispatch_fd_entry_retain(fd_entry);
+ break;
+ }
+ }
+ if (!fd_entry) {
+ // If we did not find an existing entry, create one
+ fd_entry = _dispatch_fd_entry_create_with_fd(fd, hash);
+ }
+ dispatch_async(fd_entry->barrier_queue, ^{
+ _dispatch_io_debug("fd entry init completion", fd);
+ completion_callback(fd_entry);
+ // stat() is complete, release reference to fd_entry
+ _dispatch_fd_entry_release(fd_entry);
+ });
+ });
+}
+
+static dispatch_fd_entry_t
+_dispatch_fd_entry_create(dispatch_queue_t q)
+{
+ dispatch_fd_entry_t fd_entry;
+ fd_entry = calloc(1ul, sizeof(struct dispatch_fd_entry_s));
+ fd_entry->close_queue = dispatch_queue_create(
+ "com.apple.libdispatch-io.closeq", NULL);
+ // Use target queue to ensure that no concurrent lookups are going on when
+ // the close queue is running
+ fd_entry->close_queue->do_targetq = q;
+ _dispatch_retain(q);
+ // Suspend the cleanup queue until closing
+ _dispatch_fd_entry_retain(fd_entry);
+ return fd_entry;
+}
+
+static dispatch_fd_entry_t
+_dispatch_fd_entry_create_with_fd(dispatch_fd_t fd, uintptr_t hash)
+{
+ // On fds lock queue
+ _dispatch_io_debug("fd entry create", fd);
+ dispatch_fd_entry_t fd_entry = _dispatch_fd_entry_create(
+ _dispatch_io_fds_lockq);
+ fd_entry->fd = fd;
+ TAILQ_INSERT_TAIL(&_dispatch_io_fds[hash], fd_entry, fd_list);
+ fd_entry->barrier_queue = dispatch_queue_create(
+ "com.apple.libdispatch-io.barrierq", NULL);
+ fd_entry->barrier_group = dispatch_group_create();
+ dispatch_async(fd_entry->barrier_queue, ^{
+ _dispatch_io_debug("fd entry stat", fd);
+ int err, orig_flags, orig_nosigpipe = -1;
+ struct stat st;
+ _dispatch_io_syscall_switch(err,
+ fstat(fd, &st),
+ default: fd_entry->err = err; return;
+ );
+ fd_entry->stat.dev = st.st_dev;
+ fd_entry->stat.mode = st.st_mode;
+ _dispatch_io_syscall_switch(err,
+ orig_flags = fcntl(fd, F_GETFL),
+ default: (void)dispatch_assume_zero(err); break;
+ );
+#if DISPATCH_USE_SETNOSIGPIPE // rdar://problem/4121123
+ if (S_ISFIFO(st.st_mode)) {
+ _dispatch_io_syscall_switch(err,
+ orig_nosigpipe = fcntl(fd, F_GETNOSIGPIPE),
+ default: (void)dispatch_assume_zero(err); break;
+ );
+ if (orig_nosigpipe != -1) {
+ _dispatch_io_syscall_switch(err,
+ orig_nosigpipe = fcntl(fd, F_SETNOSIGPIPE, 1),
+ default:
+ orig_nosigpipe = -1;
+ (void)dispatch_assume_zero(err);
+ break;
+ );
+ }
+ }
+#endif
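+ // Regular files are handled by the disk I/O path with blocking I/O;
+ // all other descriptor types use the stream path with O_NONBLOCK set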
+ if (S_ISREG(st.st_mode)) {
+ if (orig_flags != -1) {
+ _dispatch_io_syscall_switch(err,
+ fcntl(fd, F_SETFL, orig_flags & ~O_NONBLOCK),
+ default:
+ orig_flags = -1;
+ (void)dispatch_assume_zero(err);
+ break;
+ );
+ }
+ int32_t dev = major(st.st_dev);
+ // We have to get the disk on the global dev queue. The
+ // barrier queue cannot continue until that is complete
+ dispatch_suspend(fd_entry->barrier_queue);
+ dispatch_once_f(&_dispatch_io_devs_lockq_pred, NULL,
+ _dispatch_io_devs_lockq_init);
+ dispatch_async(_dispatch_io_devs_lockq, ^{
+ _dispatch_disk_init(fd_entry, dev);
+ dispatch_resume(fd_entry->barrier_queue);
+ });
+ } else {
+ if (orig_flags != -1) {
+ _dispatch_io_syscall_switch(err,
+ fcntl(fd, F_SETFL, orig_flags | O_NONBLOCK),
+ default:
+ orig_flags = -1;
+ (void)dispatch_assume_zero(err);
+ break;
+ );
+ }
+ _dispatch_stream_init(fd_entry, _dispatch_get_root_queue(
+ DISPATCH_QUEUE_PRIORITY_DEFAULT, false));
+ }
+ fd_entry->orig_flags = orig_flags;
+ fd_entry->orig_nosigpipe = orig_nosigpipe;
+ });
+ // This is the first item run when the close queue is resumed, indicating
+ // that all channels associated with this entry have been closed and that
+ // all operations associated with this entry have been freed
+ dispatch_async(fd_entry->close_queue, ^{
+ if (!fd_entry->disk) {
+ _dispatch_io_debug("close queue fd_entry cleanup", fd);
+ dispatch_op_direction_t dir;
+ for (dir = 0; dir < DOP_DIR_MAX; dir++) {
+ _dispatch_stream_dispose(fd_entry, dir);
+ }
+ } else {
+ dispatch_disk_t disk = fd_entry->disk;
+ dispatch_async(_dispatch_io_devs_lockq, ^{
+ _dispatch_release(disk);
+ });
+ }
+ // Remove this entry from the global fd list
+ TAILQ_REMOVE(&_dispatch_io_fds[hash], fd_entry, fd_list);
+ });
+ // If there was a source associated with this stream, disposing of the
+ // source cancels it and suspends the close queue. Freeing the fd_entry
+ // structure must happen after the source cancel handler has finished
+ dispatch_async(fd_entry->close_queue, ^{
+ _dispatch_io_debug("close queue release", fd);
+ dispatch_release(fd_entry->close_queue);
+ _dispatch_io_debug("barrier queue release", fd);
+ dispatch_release(fd_entry->barrier_queue);
+ _dispatch_io_debug("barrier group release", fd);
+ dispatch_release(fd_entry->barrier_group);
+ if (fd_entry->orig_flags != -1) {
+ _dispatch_io_syscall(
+ fcntl(fd, F_SETFL, fd_entry->orig_flags)
+ );
+ }
+#if DISPATCH_USE_SETNOSIGPIPE // rdar://problem/4121123
+ if (fd_entry->orig_nosigpipe != -1) {
+ _dispatch_io_syscall(
+ fcntl(fd, F_SETNOSIGPIPE, fd_entry->orig_nosigpipe)
+ );
+ }
+#endif
+ if (fd_entry->convenience_channel) {
+ fd_entry->convenience_channel->fd_entry = NULL;
+ dispatch_release(fd_entry->convenience_channel);
+ }
+ free(fd_entry);
+ });
+ return fd_entry;
+}
+
+static dispatch_fd_entry_t
+_dispatch_fd_entry_create_with_path(dispatch_io_path_data_t path_data,
+ dev_t dev, mode_t mode)
+{
+ // On devs lock queue
+ _dispatch_io_debug("fd entry create with path %s", -1, path_data->path);
+ dispatch_fd_entry_t fd_entry = _dispatch_fd_entry_create(
+ path_data->channel->queue);
+ if (S_ISREG(mode)) {
+ _dispatch_disk_init(fd_entry, major(dev));
+ } else {
+ _dispatch_stream_init(fd_entry, _dispatch_get_root_queue(
+ DISPATCH_QUEUE_PRIORITY_DEFAULT, false));
+ }
+ fd_entry->fd = -1;
+ fd_entry->orig_flags = -1;
+ fd_entry->path_data = path_data;
+ fd_entry->stat.dev = dev;
+ fd_entry->stat.mode = mode;
+ fd_entry->barrier_queue = dispatch_queue_create(
+ "com.apple.libdispatch-io.barrierq", NULL);
+ fd_entry->barrier_group = dispatch_group_create();
+ // This is the first item run when the close queue is resumed, indicating
+ // that the channel associated with this entry has been closed and that
+ // all operations associated with this entry have been freed
+ dispatch_async(fd_entry->close_queue, ^{
+ _dispatch_io_debug("close queue fd_entry cleanup", -1);
+ if (!fd_entry->disk) {
+ dispatch_op_direction_t dir;
+ for (dir = 0; dir < DOP_DIR_MAX; dir++) {
+ _dispatch_stream_dispose(fd_entry, dir);
+ }
+ }
+ if (fd_entry->fd != -1) {
+ close(fd_entry->fd);
+ }
+ if (fd_entry->path_data->channel) {
+ // If associated channel has not been released yet, mark it as
+ // no longer having an fd_entry (for stop after close).
+ // It is safe to modify channel since we are on close_queue with
+ // target queue the channel queue
+ fd_entry->path_data->channel->fd_entry = NULL;
+ }
+ });
+ dispatch_async(fd_entry->close_queue, ^{
+ _dispatch_io_debug("close queue release", -1);
+ dispatch_release(fd_entry->close_queue);
+ dispatch_release(fd_entry->barrier_queue);
+ dispatch_release(fd_entry->barrier_group);
+ free(fd_entry->path_data);
+ free(fd_entry);
+ });
+ return fd_entry;
+}
+
+static int
+_dispatch_fd_entry_open(dispatch_fd_entry_t fd_entry, dispatch_io_t channel)
+{
+ if (!(fd_entry->fd == -1 && fd_entry->path_data)) {
+ return 0;
+ }
+ if (fd_entry->err) {
+ return fd_entry->err;
+ }
+ int fd = -1;
+ int oflag = fd_entry->disk ? fd_entry->path_data->oflag & ~O_NONBLOCK :
+ fd_entry->path_data->oflag | O_NONBLOCK;
+open:
+ fd = open(fd_entry->path_data->path, oflag, fd_entry->path_data->mode);
+ if (fd == -1) {
+ int err = errno;
+ if (err == EINTR) {
+ goto open;
+ }
+ (void)dispatch_atomic_cmpxchg2o(fd_entry, err, 0, err);
+ return err;
+ }
+ if (!dispatch_atomic_cmpxchg2o(fd_entry, fd, -1, fd)) {
+ // Lost the race with another open
+ close(fd);
+ } else {
+ channel->fd_actual = fd;
+ }
+ return 0;
+}
+
+static void
+_dispatch_fd_entry_cleanup_operations(dispatch_fd_entry_t fd_entry,
+ dispatch_io_t channel)
+{
+ if (fd_entry->disk) {
+ if (channel) {
+ _dispatch_retain(channel);
+ }
+ _dispatch_fd_entry_retain(fd_entry);
+ dispatch_async(fd_entry->disk->pick_queue, ^{
+ _dispatch_disk_cleanup_operations(fd_entry->disk, channel);
+ _dispatch_fd_entry_release(fd_entry);
+ if (channel) {
+ _dispatch_release(channel);
+ }
+ });
+ } else {
+ dispatch_op_direction_t direction;
+ for (direction = 0; direction < DOP_DIR_MAX; direction++) {
+ dispatch_stream_t stream = fd_entry->streams[direction];
+ if (!stream) {
+ continue;
+ }
+ if (channel) {
+ _dispatch_retain(channel);
+ }
+ _dispatch_fd_entry_retain(fd_entry);
+ dispatch_async(stream->dq, ^{
+ _dispatch_stream_cleanup_operations(stream, channel);
+ _dispatch_fd_entry_release(fd_entry);
+ if (channel) {
+ _dispatch_release(channel);
+ }
+ });
+ }
+ }
+}
+
+#pragma mark -
+#pragma mark dispatch_stream_t/dispatch_disk_t
+
+static void
+_dispatch_stream_init(dispatch_fd_entry_t fd_entry, dispatch_queue_t tq)
+{
+ dispatch_op_direction_t direction;
+ for (direction = 0; direction < DOP_DIR_MAX; direction++) {
+ dispatch_stream_t stream;
+ stream = calloc(1ul, sizeof(struct dispatch_stream_s));
+ stream->dq = dispatch_queue_create("com.apple.libdispatch-io.streamq",
+ NULL);
+ _dispatch_retain(tq);
+ stream->dq->do_targetq = tq;
+ TAILQ_INIT(&stream->operations[DISPATCH_IO_RANDOM]);
+ TAILQ_INIT(&stream->operations[DISPATCH_IO_STREAM]);
+ fd_entry->streams[direction] = stream;
+ }
+}
+
+static void
+_dispatch_stream_dispose(dispatch_fd_entry_t fd_entry,
+ dispatch_op_direction_t direction)
+{
+ // On close queue
+ dispatch_stream_t stream = fd_entry->streams[direction];
+ if (!stream) {
+ return;
+ }
+ dispatch_assert(TAILQ_EMPTY(&stream->operations[DISPATCH_IO_STREAM]));
+ dispatch_assert(TAILQ_EMPTY(&stream->operations[DISPATCH_IO_RANDOM]));
+ if (stream->source) {
+ // Balanced by source cancel handler:
+ _dispatch_fd_entry_retain(fd_entry);
+ dispatch_source_cancel(stream->source);
+ dispatch_resume(stream->source);
+ dispatch_release(stream->source);
+ }
+ dispatch_release(stream->dq);
+ free(stream);
+}
+
+static void
+_dispatch_disk_init(dispatch_fd_entry_t fd_entry, dev_t dev)
+{
+ // On devs lock queue
+ dispatch_disk_t disk;
+ char label_name[256];
+ // Check to see if there is an existing entry for the given device
+ uintptr_t hash = DIO_HASH(dev);
+ TAILQ_FOREACH(disk, &_dispatch_io_devs[hash], disk_list) {
+ if (disk->dev == dev) {
+ _dispatch_retain(disk);
+ goto out;
+ }
+ }
+ // Otherwise create a new entry
+ size_t pending_reqs_depth = dispatch_io_defaults.max_pending_io_reqs;
+ disk = calloc(1ul, sizeof(struct dispatch_disk_s) + (pending_reqs_depth *
+ sizeof(dispatch_operation_t)));
+ disk->do_vtable = &_dispatch_disk_vtable;
+ disk->do_next = DISPATCH_OBJECT_LISTLESS;
+ disk->do_ref_cnt = 1;
+ disk->do_xref_cnt = 0;
+ disk->advise_list_depth = pending_reqs_depth;
+ disk->do_targetq = _dispatch_get_root_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT,
+ false);
+ disk->dev = dev;
+ TAILQ_INIT(&disk->operations);
+ disk->cur_rq = TAILQ_FIRST(&disk->operations);
+ sprintf(label_name, "com.apple.libdispatch-io.deviceq.%d", dev);
+ disk->pick_queue = dispatch_queue_create(label_name, NULL);
+ TAILQ_INSERT_TAIL(&_dispatch_io_devs[hash], disk, disk_list);
+out:
+ fd_entry->disk = disk;
+ TAILQ_INIT(&fd_entry->stream_ops);
+}
+
+static void
+_dispatch_disk_dispose(dispatch_disk_t disk)
+{
+ uintptr_t hash = DIO_HASH(disk->dev);
+ TAILQ_REMOVE(&_dispatch_io_devs[hash], disk, disk_list);
+ dispatch_assert(TAILQ_EMPTY(&disk->operations));
+ size_t i;
+ for (i=0; i<disk->advise_list_depth; ++i) {
+ dispatch_assert(!disk->advise_list[i]);
+ }
+ dispatch_release(disk->pick_queue);
+ free(disk);
+}
+
+#pragma mark -
+#pragma mark dispatch_stream_operations/dispatch_disk_operations
+
+static inline bool
+_dispatch_stream_operation_avail(dispatch_stream_t stream)
+{
+ return !(TAILQ_EMPTY(&stream->operations[DISPATCH_IO_RANDOM])) ||
+ !(TAILQ_EMPTY(&stream->operations[DISPATCH_IO_STREAM]));
+}
+
+static void
+_dispatch_stream_enqueue_operation(dispatch_stream_t stream,
+ dispatch_operation_t op, dispatch_data_t data)
+{
+ if (!_dispatch_operation_should_enqueue(op, stream->dq, data)) {
+ return;
+ }
+ bool no_ops = !_dispatch_stream_operation_avail(stream);
+ TAILQ_INSERT_TAIL(&stream->operations[op->params.type], op, operation_list);
+ if (no_ops) {
+ dispatch_async_f(stream->dq, stream, _dispatch_stream_handler);
+ }
+}
+
+static void
+_dispatch_disk_enqueue_operation(dispatch_disk_t disk, dispatch_operation_t op,
+ dispatch_data_t data)
+{
+ if (!_dispatch_operation_should_enqueue(op, disk->pick_queue, data)) {
+ return;
+ }
+ if (op->params.type == DISPATCH_IO_STREAM) {
+ if (TAILQ_EMPTY(&op->fd_entry->stream_ops)) {
+ TAILQ_INSERT_TAIL(&disk->operations, op, operation_list);
+ }
+ TAILQ_INSERT_TAIL(&op->fd_entry->stream_ops, op, stream_list);
+ } else {
+ TAILQ_INSERT_TAIL(&disk->operations, op, operation_list);
+ }
+ _dispatch_disk_handler(disk);
+}
+
+static void
+_dispatch_stream_complete_operation(dispatch_stream_t stream,
+ dispatch_operation_t op)
+{
+ // On stream queue
+ _dispatch_io_debug("complete operation", op->fd_entry->fd);
+ TAILQ_REMOVE(&stream->operations[op->params.type], op, operation_list);
+ if (op == stream->op) {
+ stream->op = NULL;
+ }
+ if (op->timer) {
+ dispatch_source_cancel(op->timer);
+ }
+ // Final release will deliver any pending data
+ _dispatch_release(op);
+}
+
+static void
+_dispatch_disk_complete_operation(dispatch_disk_t disk, dispatch_operation_t op)
+{
+ // On pick queue
+ _dispatch_io_debug("complete operation", op->fd_entry->fd);
+ // Current request is always the last op returned
+ if (disk->cur_rq == op) {
+ disk->cur_rq = TAILQ_PREV(op, dispatch_disk_operations_s,
+ operation_list);
+ }
+ if (op->params.type == DISPATCH_IO_STREAM) {
+ // Check if there are other pending stream operations behind it
+ dispatch_operation_t op_next = TAILQ_NEXT(op, stream_list);
+ TAILQ_REMOVE(&op->fd_entry->stream_ops, op, stream_list);
+ if (op_next) {
+ TAILQ_INSERT_TAIL(&disk->operations, op_next, operation_list);
+ }
+ }
+ TAILQ_REMOVE(&disk->operations, op, operation_list);
+ if (op->timer) {
+ dispatch_source_cancel(op->timer);
+ }
+ // Final release will deliver any pending data
+ _dispatch_release(op);
+}
+
+static dispatch_operation_t
+_dispatch_stream_pick_next_operation(dispatch_stream_t stream,
+ dispatch_operation_t op)
+{
+ // On stream queue
+ if (!op) {
+ // On the first run through, pick the first operation
+ if (!_dispatch_stream_operation_avail(stream)) {
+ return op;
+ }
+ if (!TAILQ_EMPTY(&stream->operations[DISPATCH_IO_STREAM])) {
+ op = TAILQ_FIRST(&stream->operations[DISPATCH_IO_STREAM]);
+ } else if (!TAILQ_EMPTY(&stream->operations[DISPATCH_IO_RANDOM])) {
+ op = TAILQ_FIRST(&stream->operations[DISPATCH_IO_RANDOM]);
+ }
+ return op;
+ }
+ if (op->params.type == DISPATCH_IO_STREAM) {
+ // Stream operations need to be serialized so continue the current
+ // operation until it is finished
+ return op;
+ }
+ // Get the next random operation (round-robin)
+ if (op->params.type == DISPATCH_IO_RANDOM) {
+ op = TAILQ_NEXT(op, operation_list);
+ if (!op) {
+ op = TAILQ_FIRST(&stream->operations[DISPATCH_IO_RANDOM]);
+ }
+ return op;
+ }
+ return NULL;
+}
+
+static dispatch_operation_t
+_dispatch_disk_pick_next_operation(dispatch_disk_t disk)
+{
+ // On pick queue
+ dispatch_operation_t op;
+ if (!TAILQ_EMPTY(&disk->operations)) {
+ if (disk->cur_rq == NULL) {
+ op = TAILQ_FIRST(&disk->operations);
+ } else {
+ op = disk->cur_rq;
+ do {
+ op = TAILQ_NEXT(op, operation_list);
+ if (!op) {
+ op = TAILQ_FIRST(&disk->operations);
+ }
+ // TODO: more involved picking algorithm rdar://problem/8780312
+ } while (op->active && op != disk->cur_rq);
+ }
+ if (!op->active) {
+ disk->cur_rq = op;
+ return op;
+ }
+ }
+ return NULL;
+}
+
+static void
+_dispatch_stream_cleanup_operations(dispatch_stream_t stream,
+ dispatch_io_t channel)
+{
+ // On stream queue
+ dispatch_operation_t op, tmp;
+ typeof(*stream->operations) *operations;
+ operations = &stream->operations[DISPATCH_IO_RANDOM];
+ TAILQ_FOREACH_SAFE(op, operations, operation_list, tmp) {
+ if (!channel || op->channel == channel) {
+ _dispatch_stream_complete_operation(stream, op);
+ }
+ }
+ operations = &stream->operations[DISPATCH_IO_STREAM];
+ TAILQ_FOREACH_SAFE(op, operations, operation_list, tmp) {
+ if (!channel || op->channel == channel) {
+ _dispatch_stream_complete_operation(stream, op);
+ }
+ }
+ if (stream->source_running && !_dispatch_stream_operation_avail(stream)) {
+ dispatch_suspend(stream->source);
+ stream->source_running = false;
+ }
+}
+
+static void
+_dispatch_disk_cleanup_operations(dispatch_disk_t disk, dispatch_io_t channel)
+{
+ // On pick queue
+ dispatch_operation_t op, tmp;
+ TAILQ_FOREACH_SAFE(op, &disk->operations, operation_list, tmp) {
+ if (!channel || op->channel == channel) {
+ _dispatch_disk_complete_operation(disk, op);
+ }
+ }
+}
+
+#pragma mark -
+#pragma mark dispatch_stream_handler/dispatch_disk_handler
+
+static dispatch_source_t
+_dispatch_stream_source(dispatch_stream_t stream, dispatch_operation_t op)
+{
+ // On stream queue
+ if (stream->source) {
+ return stream->source;
+ }
+ dispatch_fd_t fd = op->fd_entry->fd;
+ _dispatch_io_debug("stream source create", fd);
+ dispatch_source_t source = NULL;
+ if (op->direction == DOP_DIR_READ) {
+ source = dispatch_source_create(DISPATCH_SOURCE_TYPE_READ, fd, 0,
+ stream->dq);
+ } else if (op->direction == DOP_DIR_WRITE) {
+ source = dispatch_source_create(DISPATCH_SOURCE_TYPE_WRITE, fd, 0,
+ stream->dq);
+ } else {
+ dispatch_assert(op->direction < DOP_DIR_MAX);
+ return NULL;
+ }
+ dispatch_set_context(source, stream);
+ dispatch_source_set_event_handler_f(source,
+ _dispatch_stream_source_handler);
+ // Close queue must not run user cleanup handlers until sources are fully
+ // unregistered
+ dispatch_queue_t close_queue = op->fd_entry->close_queue;
+ dispatch_source_set_cancel_handler(source, ^{
+ _dispatch_io_debug("stream source cancel", fd);
+ dispatch_resume(close_queue);
+ });
+ stream->source = source;
+ return stream->source;
+}
+
+static void
+_dispatch_stream_source_handler(void *ctx)
+{
+ // On stream queue
+ dispatch_stream_t stream = (dispatch_stream_t)ctx;
+ dispatch_suspend(stream->source);
+ stream->source_running = false;
+ return _dispatch_stream_handler(stream);
+}
+
+static void
+_dispatch_stream_handler(void *ctx)
+{
+ // On stream queue
+ dispatch_stream_t stream = (dispatch_stream_t)ctx;
+ dispatch_operation_t op;
+pick:
+ op = _dispatch_stream_pick_next_operation(stream, stream->op);
+ if (!op) {
+ _dispatch_debug("no operation found: stream %p", stream);
+ return;
+ }
+ int err = _dispatch_io_get_error(op, NULL, true);
+ if (err) {
+ op->err = err;
+ _dispatch_stream_complete_operation(stream, op);
+ goto pick;
+ }
+ stream->op = op;
+ _dispatch_io_debug("stream handler", op->fd_entry->fd);
+ dispatch_fd_entry_t fd_entry = op->fd_entry;
+ _dispatch_fd_entry_retain(fd_entry);
+ // For performance analysis
+ if (!op->total && dispatch_io_defaults.initial_delivery) {
+ // Empty delivery to signal the start of the operation
+ _dispatch_io_debug("initial delivery", op->fd_entry->fd);
+ _dispatch_operation_deliver_data(op, DOP_DELIVER);
+ }
+ // TODO: perform on the operation target queue to get correct priority
+ int result = _dispatch_operation_perform(op), flags = -1;
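+ // flags stays DOP_DEFAULT only on the DISPATCH_OP_DELIVER fall-through
+ // path, in which case data is delivered but the operation not completed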
+ switch (result) {
+ case DISPATCH_OP_DELIVER:
+ flags = DOP_DEFAULT;
+ // Fall through
+ case DISPATCH_OP_DELIVER_AND_COMPLETE:
+ flags = (flags != DOP_DEFAULT) ? DOP_DELIVER | DOP_NO_EMPTY :
+ DOP_DEFAULT;
+ _dispatch_operation_deliver_data(op, flags);
+ // Fall through
+ case DISPATCH_OP_COMPLETE:
+ if (flags != DOP_DEFAULT) {
+ _dispatch_stream_complete_operation(stream, op);
+ }
+ if (_dispatch_stream_operation_avail(stream)) {
+ dispatch_async_f(stream->dq, stream, _dispatch_stream_handler);
+ }
+ break;
+ case DISPATCH_OP_COMPLETE_RESUME:
+ _dispatch_stream_complete_operation(stream, op);
+ // Fall through
+ case DISPATCH_OP_RESUME:
+ if (_dispatch_stream_operation_avail(stream)) {
+ stream->source_running = true;
+ dispatch_resume(_dispatch_stream_source(stream, op));
+ }
+ break;
+ case DISPATCH_OP_ERR:
+ _dispatch_stream_cleanup_operations(stream, op->channel);
+ break;
+ case DISPATCH_OP_FD_ERR:
+ _dispatch_fd_entry_retain(fd_entry);
+ dispatch_async(fd_entry->barrier_queue, ^{
+ _dispatch_fd_entry_cleanup_operations(fd_entry, NULL);
+ _dispatch_fd_entry_release(fd_entry);
+ });
+ break;
+ default:
+ break;
+ }
+ _dispatch_fd_entry_release(fd_entry);
+ return;
+}
+
+static void
+_dispatch_disk_handler(void *ctx)
+{
+ // On pick queue
+ dispatch_disk_t disk = (dispatch_disk_t)ctx;
+ if (disk->io_active) {
+ return;
+ }
+ _dispatch_io_debug("disk handler", -1);
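+ // Fill the advise list (a circular buffer of advise_list_depth slots)
+ // with inactive operations picked round-robin, then start I/O for the
+ // operation at req_idx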
+ dispatch_operation_t op;
+ size_t i = disk->free_idx, j = disk->req_idx;
+ if (j <= i) {
+ j += disk->advise_list_depth;
+ }
+ while (i <= j) {
+ if ((!disk->advise_list[i%disk->advise_list_depth]) &&
+ (op = _dispatch_disk_pick_next_operation(disk))) {
+ int err = _dispatch_io_get_error(op, NULL, true);
+ if (err) {
+ op->err = err;
+ _dispatch_disk_complete_operation(disk, op);
+ continue;
+ }
+ _dispatch_retain(op);
+ disk->advise_list[i%disk->advise_list_depth] = op;
+ op->active = true;
+ } else {
+ // No more operations to get
+ break;
+ }
+ i++;
+ }
+ disk->free_idx = (i%disk->advise_list_depth);
+ op = disk->advise_list[disk->req_idx];
+ if (op) {
+ disk->io_active = true;
+ dispatch_async_f(op->do_targetq, disk, _dispatch_disk_perform);
+ }
+}
+
+static void
+_dispatch_disk_perform(void *ctxt)
+{
+ dispatch_disk_t disk = ctxt;
+ size_t chunk_size = dispatch_io_defaults.chunk_pages * PAGE_SIZE;
+ _dispatch_io_debug("disk perform", -1);
+ dispatch_operation_t op;
+ size_t i = disk->advise_idx, j = disk->free_idx;
+ if (j <= i) {
+ j += disk->advise_list_depth;
+ }
+ do {
+ op = disk->advise_list[i%disk->advise_list_depth];
+ if (!op) {
+ // Nothing more to advise, must be at free_idx
+ dispatch_assert(i%disk->advise_list_depth == disk->free_idx);
+ break;
+ }
+ if (op->direction == DOP_DIR_WRITE) {
+ // TODO: preallocate writes? rdar://problem/9032172
+ continue;
+ }
+ if (op->fd_entry->fd == -1 && _dispatch_fd_entry_open(op->fd_entry,
+ op->channel)) {
+ continue;
+ }
+ // For performance analysis
+ if (!op->total && dispatch_io_defaults.initial_delivery) {
+ // Empty delivery to signal the start of the operation
+ _dispatch_io_debug("initial delivery", op->fd_entry->fd);
+ _dispatch_operation_deliver_data(op, DOP_DELIVER);
+ }
+ // Advise two chunks if the list only has one element and this is the
+ // first advise on the operation
+ if ((j-i) == 1 && !disk->advise_list[disk->free_idx] &&
+ !op->advise_offset) {
+ chunk_size *= 2;
+ }
+ _dispatch_operation_advise(op, chunk_size);
+ } while (++i < j);
+ disk->advise_idx = i%disk->advise_list_depth;
+ op = disk->advise_list[disk->req_idx];
+ int result = _dispatch_operation_perform(op);
+ disk->advise_list[disk->req_idx] = NULL;
+ disk->req_idx = (disk->req_idx + 1) % disk->advise_list_depth;
+ dispatch_async(disk->pick_queue, ^{
+ switch (result) {
+ case DISPATCH_OP_DELIVER:
+ _dispatch_operation_deliver_data(op, DOP_DELIVER);
+ break;
+ case DISPATCH_OP_COMPLETE:
+ _dispatch_disk_complete_operation(disk, op);
+ break;
+ case DISPATCH_OP_DELIVER_AND_COMPLETE:
+ _dispatch_operation_deliver_data(op, DOP_DELIVER);
+ _dispatch_disk_complete_operation(disk, op);
+ break;
+ case DISPATCH_OP_ERR:
+ _dispatch_disk_cleanup_operations(disk, op->channel);
+ break;
+ case DISPATCH_OP_FD_ERR:
+ _dispatch_disk_cleanup_operations(disk, NULL);
+ break;
+ default:
+ dispatch_assert(result);
+ break;
+ }
+ op->active = false;
+ disk->io_active = false;
+ _dispatch_disk_handler(disk);
+ // Balancing the retain in _dispatch_disk_handler. Note that op must be
+ // released at the very end, since it might hold the last reference to
+ // the disk
+ _dispatch_release(op);
+ });
+}
+
+#pragma mark -
+#pragma mark dispatch_operation_perform
+
+static void
+_dispatch_operation_advise(dispatch_operation_t op, size_t chunk_size)
+{
+ int err;
+ struct radvisory advise;
+ // No point in issuing a read advise for the next chunk if we are already
+ // more than a chunk ahead of the bytes actually read
+ if (op->advise_offset > (off_t)((op->offset+op->total) + chunk_size +
+ PAGE_SIZE)) {
+ return;
+ }
+ advise.ra_count = (int)chunk_size;
+ if (!op->advise_offset) {
+ op->advise_offset = op->offset;
+ // If this is the first time through, align the advised range to a
+ // page boundary
+ size_t pg_fraction = (size_t)((op->offset + chunk_size) % PAGE_SIZE);
+ advise.ra_count += (int)(pg_fraction ? PAGE_SIZE - pg_fraction : 0);
+ }
+ advise.ra_offset = op->advise_offset;
+ op->advise_offset += advise.ra_count;
+ _dispatch_io_syscall_switch(err,
+ fcntl(op->fd_entry->fd, F_RDADVISE, &advise),
+ // TODO: set disk status on error
+ default: (void)dispatch_assume_zero(err); break;
+ );
+}
+
+static int
+_dispatch_operation_perform(dispatch_operation_t op)
+{
+ int err = _dispatch_io_get_error(op, NULL, true);
+ if (err) {
+ goto error;
+ }
+ if (!op->buf) {
+ size_t max_buf_siz = op->params.high;
+ size_t chunk_siz = dispatch_io_defaults.chunk_pages * PAGE_SIZE;
+ if (op->direction == DOP_DIR_READ) {
+ // If necessary, create a buffer for the ongoing operation, large
+ // enough to fit chunk_pages but at most high-water
+ size_t data_siz = dispatch_data_get_size(op->data);
+ if (data_siz) {
+ dispatch_assert(data_siz < max_buf_siz);
+ max_buf_siz -= data_siz;
+ }
+ if (max_buf_siz > chunk_siz) {
+ max_buf_siz = chunk_siz;
+ }
+ if (op->length < SIZE_MAX) {
+ op->buf_siz = op->length - op->total;
+ if (op->buf_siz > max_buf_siz) {
+ op->buf_siz = max_buf_siz;
+ }
+ } else {
+ op->buf_siz = max_buf_siz;
+ }
+ op->buf = valloc(op->buf_siz);
+ _dispatch_io_debug("buffer allocated", op->fd_entry->fd);
+ } else if (op->direction == DOP_DIR_WRITE) {
+ // Always write the first data piece; if it is smaller than a
+ // chunk, accumulate further data pieces until the chunk size is reached
+ if (chunk_siz > max_buf_siz) {
+ chunk_siz = max_buf_siz;
+ }
+ op->buf_siz = 0;
+ dispatch_data_apply(op->data,
+ ^(dispatch_data_t region DISPATCH_UNUSED,
+ size_t offset DISPATCH_UNUSED,
+ const void* buf DISPATCH_UNUSED, size_t len) {
+ size_t siz = op->buf_siz + len;
+ if (!op->buf_siz || siz <= chunk_siz) {
+ op->buf_siz = siz;
+ }
+ return (bool)(siz < chunk_siz);
+ });
+ if (op->buf_siz > max_buf_siz) {
+ op->buf_siz = max_buf_siz;
+ }
+ dispatch_data_t d;
+ d = dispatch_data_create_subrange(op->data, 0, op->buf_siz);
+ op->buf_data = dispatch_data_create_map(d, (const void**)&op->buf,
+ NULL);
+ _dispatch_io_data_release(d);
+ _dispatch_io_debug("buffer mapped", op->fd_entry->fd);
+ }
+ }
+ if (op->fd_entry->fd == -1) {
+ err = _dispatch_fd_entry_open(op->fd_entry, op->channel);
+ if (err) {
+ goto error;
+ }
+ }
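+ // Perform the I/O for the unfilled remainder of the buffer at the
+ // operation's current position, restarting on EINTR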
+ void *buf = op->buf + op->buf_len;
+ size_t len = op->buf_siz - op->buf_len;
+ off_t off = op->offset + op->total;
+ ssize_t processed = -1;
+syscall:
+ if (op->direction == DOP_DIR_READ) {
+ if (op->params.type == DISPATCH_IO_STREAM) {
+ processed = read(op->fd_entry->fd, buf, len);
+ } else if (op->params.type == DISPATCH_IO_RANDOM) {
+ processed = pread(op->fd_entry->fd, buf, len, off);
+ }
+ } else if (op->direction == DOP_DIR_WRITE) {
+ if (op->params.type == DISPATCH_IO_STREAM) {
+ processed = write(op->fd_entry->fd, buf, len);
+ } else if (op->params.type == DISPATCH_IO_RANDOM) {
+ processed = pwrite(op->fd_entry->fd, buf, len, off);
+ }
+ }
+ // Encountered an error on the file descriptor
+ if (processed == -1) {
+ err = errno;
+ if (err == EINTR) {
+ goto syscall;
+ }
+ goto error;
+ }
+ // EOF is indicated by two handler invocations
+ if (processed == 0) {
+ _dispatch_io_debug("EOF", op->fd_entry->fd);
+ return DISPATCH_OP_DELIVER_AND_COMPLETE;
+ }
+ op->buf_len += processed;
+ op->total += processed;
+ if (op->total == op->length) {
+ // Finished processing all the bytes requested by the operation
+ return DISPATCH_OP_COMPLETE;
+ } else {
+ // Deliver data only if we satisfy the filters
+ return DISPATCH_OP_DELIVER;
+ }
+error:
+ if (err == EAGAIN) {
+ // For disk-based files with blocking I/O we should never get EAGAIN
+ dispatch_assert(!op->fd_entry->disk);
+ _dispatch_io_debug("EAGAIN %d", op->fd_entry->fd, err);
+ if (op->direction == DOP_DIR_READ && op->total &&
+ op->channel == op->fd_entry->convenience_channel) {
+ // Convenience read with available data completes on EAGAIN
+ return DISPATCH_OP_COMPLETE_RESUME;
+ }
+ return DISPATCH_OP_RESUME;
+ }
+ op->err = err;
+ switch (err) {
+ case ECANCELED:
+ return DISPATCH_OP_ERR;
+ case EBADF:
+ (void)dispatch_atomic_cmpxchg2o(op->fd_entry, err, 0, err);
+ return DISPATCH_OP_FD_ERR;
+ default:
+ return DISPATCH_OP_COMPLETE;
+ }
+}
+
+static void
+_dispatch_operation_deliver_data(dispatch_operation_t op,
+ dispatch_op_flags_t flags)
+{
+ // Called from the stream queue or pick queue, or when the op is finalized
+ dispatch_data_t data = NULL;
+ int err = 0;
+ size_t undelivered = op->undelivered + op->buf_len;
+ bool deliver = (flags & (DOP_DELIVER|DOP_DONE)) ||
+ (op->flags & DOP_DELIVER);
+ op->flags = DOP_DEFAULT;
+ if (!deliver) {
+ // Don't deliver data until low water mark has been reached
+ if (undelivered >= op->params.low) {
+ deliver = true;
+ } else if (op->buf_len < op->buf_siz) {
+ // Request buffer is not yet used up
+ _dispatch_io_debug("buffer data", op->fd_entry->fd);
+ return;
+ }
+ } else {
+ err = op->err;
+ if (!err && (op->channel->atomic_flags & DIO_STOPPED)) {
+ err = ECANCELED;
+ op->err = err;
+ }
+ }
+ // Deliver data or buffer used up
+ if (op->direction == DOP_DIR_READ) {
+ if (op->buf_len) {
+ void *buf = op->buf;
+ data = dispatch_data_create(buf, op->buf_len, NULL,
+ DISPATCH_DATA_DESTRUCTOR_FREE);
+ op->buf = NULL;
+ op->buf_len = 0;
+ dispatch_data_t d = dispatch_data_create_concat(op->data, data);
+ _dispatch_io_data_release(op->data);
+ _dispatch_io_data_release(data);
+ data = d;
+ } else {
+ data = op->data;
+ }
+ op->data = deliver ? dispatch_data_empty : data;
+ } else if (op->direction == DOP_DIR_WRITE) {
+ if (deliver) {
+ data = dispatch_data_create_subrange(op->data, op->buf_len,
+ op->length);
+ }
+ if (op->buf_len == op->buf_siz) {
+ _dispatch_io_data_release(op->buf_data);
+ op->buf_data = NULL;
+ op->buf = NULL;
+ op->buf_len = 0;
+ // Trim newly written buffer from head of unwritten data
+ dispatch_data_t d;
+ if (deliver) {
+ _dispatch_io_data_retain(data);
+ d = data;
+ } else {
+ d = dispatch_data_create_subrange(op->data, op->buf_len,
+ op->length);
+ }
+ _dispatch_io_data_release(op->data);
+ op->data = d;
+ }
+ } else {
+ dispatch_assert(op->direction < DOP_DIR_MAX);
+ return;
+ }
+ if (!deliver || ((flags & DOP_NO_EMPTY) && !dispatch_data_get_size(data))) {
+ op->undelivered = undelivered;
+ _dispatch_io_debug("buffer data", op->fd_entry->fd);
+ return;
+ }
+ op->undelivered = 0;
+ _dispatch_io_debug("deliver data", op->fd_entry->fd);
+ dispatch_op_direction_t direction = op->direction;
+ __block dispatch_data_t d = data;
+ dispatch_io_handler_t handler = op->handler;
+#if DISPATCH_IO_DEBUG
+ int fd = op->fd_entry->fd;
+#endif
+ dispatch_fd_entry_t fd_entry = op->fd_entry;
+ _dispatch_fd_entry_retain(fd_entry);
+ dispatch_io_t channel = op->channel;
+ _dispatch_retain(channel);
+ // Note that data delivery may occur after the operation is freed
+ dispatch_async(op->op_q, ^{
+ bool done = (flags & DOP_DONE);
+ if (done) {
+ if (direction == DOP_DIR_READ && err) {
+ if (dispatch_data_get_size(d)) {
+ _dispatch_io_debug("IO handler invoke", fd);
+ handler(false, d, 0);
+ }
+ d = NULL;
+ } else if (direction == DOP_DIR_WRITE && !err) {
+ d = NULL;
+ }
+ }
+ _dispatch_io_debug("IO handler invoke", fd);
+ handler(done, d, err);
+ _dispatch_release(channel);
+ _dispatch_fd_entry_release(fd_entry);
+ _dispatch_io_data_release(data);
+ });
+}
diff --git a/src/io_internal.h b/src/io_internal.h
new file mode 100644
index 0000000..c43bd75
--- /dev/null
+++ b/src/io_internal.h
@@ -0,0 +1,198 @@
+/*
+ * Copyright (c) 2009-2011 Apple Inc. All rights reserved.
+ *
+ * @APPLE_APACHE_LICENSE_HEADER_START@
+ *
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ *
+ * @APPLE_APACHE_LICENSE_HEADER_END@
+ */
+
+/*
+ * IMPORTANT: This header file describes INTERNAL interfaces to libdispatch
+ * which are subject to change in future releases of Mac OS X. Any applications
+ * relying on these interfaces WILL break.
+ */
+
+#ifndef __DISPATCH_IO_INTERNAL__
+#define __DISPATCH_IO_INTERNAL__
+
+#ifndef __DISPATCH_INDIRECT__
+#error "Please #include <dispatch/dispatch.h> instead of this file directly."
+#include <dispatch/base.h> // for HeaderDoc
+#endif
+
+#define _DISPATCH_IO_LABEL_SIZE 16
+
+#ifndef DISPATCH_IO_DEBUG
+#define DISPATCH_IO_DEBUG 0
+#endif
+
+#if TARGET_OS_EMBEDDED // rdar://problem/9032036
+#define DIO_MAX_CHUNK_PAGES 128u // 512kB chunk size
+#else
+#define DIO_MAX_CHUNK_PAGES 256u // 1024kB chunk size
+#endif
+
+#define DIO_DEFAULT_LOW_WATER_CHUNKS 1u // default low-water mark
+#define DIO_MAX_PENDING_IO_REQS 6u // Pending I/O read advises
+
+typedef unsigned int dispatch_op_direction_t;
+enum {
+ DOP_DIR_READ = 0,
+ DOP_DIR_WRITE,
+ DOP_DIR_MAX,
+ DOP_DIR_IGNORE = UINT_MAX,
+};
+
+typedef unsigned int dispatch_op_flags_t;
+#define DOP_DEFAULT 0u // check conditions to determine delivery
+#define DOP_DELIVER 1u // always deliver operation
+#define DOP_DONE 2u // operation is done (implies deliver)
+#define DOP_STOP 4u // operation interrupted by chan stop (implies done)
+#define DOP_NO_EMPTY 8u // don't deliver empty data
+
+// dispatch_io_t atomic_flags
+#define DIO_CLOSED 1u // channel has been closed
+#define DIO_STOPPED 2u // channel has been stopped (implies closed)
+
+#define _dispatch_io_data_retain(x) dispatch_retain(x)
+#define _dispatch_io_data_release(x) dispatch_release(x)
+
+#if DISPATCH_IO_DEBUG
+#define _dispatch_io_debug(msg, fd, args...) \
+ _dispatch_debug("fd %d: " msg, (fd), ##args)
+#else
+#define _dispatch_io_debug(msg, fd, args...)
+#endif
+
+DISPATCH_DECL(dispatch_operation);
+DISPATCH_DECL(dispatch_disk);
+
+struct dispatch_stream_s {
+ dispatch_queue_t dq;
+ dispatch_source_t source;
+ dispatch_operation_t op;
+ bool source_running;
+ TAILQ_HEAD(, dispatch_operation_s) operations[2];
+};
+
+typedef struct dispatch_stream_s *dispatch_stream_t;
+
+struct dispatch_io_path_data_s {
+ dispatch_io_t channel;
+ int oflag;
+ mode_t mode;
+ size_t pathlen;
+ char path[];
+};
+
+typedef struct dispatch_io_path_data_s *dispatch_io_path_data_t;
+
+struct dispatch_stat_s {
+ dev_t dev;
+ mode_t mode;
+};
+
+struct dispatch_disk_vtable_s {
+ DISPATCH_VTABLE_HEADER(dispatch_disk_s);
+};
+
+struct dispatch_disk_s {
+ DISPATCH_STRUCT_HEADER(dispatch_disk_s, dispatch_disk_vtable_s);
+ dev_t dev;
+ TAILQ_HEAD(dispatch_disk_operations_s, dispatch_operation_s) operations;
+ dispatch_operation_t cur_rq;
+ dispatch_queue_t pick_queue;
+
+ size_t free_idx;
+ size_t req_idx;
+ size_t advise_idx;
+ bool io_active;
+ int err;
+ TAILQ_ENTRY(dispatch_disk_s) disk_list;
+ size_t advise_list_depth;
+ dispatch_operation_t advise_list[];
+};
+
+struct dispatch_fd_entry_s {
+ dispatch_fd_t fd;
+ dispatch_io_path_data_t path_data;
+ int orig_flags, orig_nosigpipe, err;
+ struct dispatch_stat_s stat;
+ dispatch_stream_t streams[2];
+ dispatch_disk_t disk;
+ dispatch_queue_t close_queue, barrier_queue;
+ dispatch_group_t barrier_group;
+ dispatch_io_t convenience_channel;
+ TAILQ_HEAD(, dispatch_operation_s) stream_ops;
+ TAILQ_ENTRY(dispatch_fd_entry_s) fd_list;
+};
+
+typedef struct dispatch_fd_entry_s *dispatch_fd_entry_t;
+
+typedef struct dispatch_io_param_s {
+ dispatch_io_type_t type; // STREAM OR RANDOM
+ size_t low;
+ size_t high;
+ uint64_t interval;
+ unsigned long interval_flags;
+} dispatch_io_param_s;
+
+struct dispatch_operation_vtable_s {
+ DISPATCH_VTABLE_HEADER(dispatch_operation_s);
+};
+
+struct dispatch_operation_s {
+ DISPATCH_STRUCT_HEADER(dispatch_operation_s, dispatch_operation_vtable_s);
+ dispatch_queue_t op_q;
+ dispatch_op_direction_t direction; // READ OR WRITE
+ dispatch_io_param_s params;
+ off_t offset;
+ size_t length;
+ int err;
+ dispatch_io_handler_t handler;
+ dispatch_io_t channel;
+ dispatch_fd_entry_t fd_entry;
+ dispatch_source_t timer;
+ bool active;
+ int count;
+ off_t advise_offset;
+ void* buf;
+ dispatch_op_flags_t flags;
+ size_t buf_siz, buf_len, undelivered, total;
+ dispatch_data_t buf_data, data;
+ TAILQ_ENTRY(dispatch_operation_s) operation_list;
+ // the request list in the fd_entry stream_ops
+ TAILQ_ENTRY(dispatch_operation_s) stream_list;
+};
+
+struct dispatch_io_vtable_s {
+ DISPATCH_VTABLE_HEADER(dispatch_io_s);
+};
+
+struct dispatch_io_s {
+ DISPATCH_STRUCT_HEADER(dispatch_io_s, dispatch_io_vtable_s);
+ dispatch_queue_t queue, barrier_queue;
+ dispatch_group_t barrier_group;
+ dispatch_io_param_s params;
+ dispatch_fd_entry_t fd_entry;
+ unsigned int atomic_flags;
+ dispatch_fd_t fd, fd_actual;
+ off_t f_ptr;
+ int err; // contains creation errors only
+};
+
+void _dispatch_io_set_target_queue(dispatch_io_t channel, dispatch_queue_t dq);
+
+#endif // __DISPATCH_IO_INTERNAL__
diff --git a/src/object.c b/src/object.c
index 13bc8ce..b84979b 100644
--- a/src/object.c
+++ b/src/object.c
@@ -1,65 +1,32 @@
/*
- * Copyright (c) 2008-2009 Apple Inc. All rights reserved.
+ * Copyright (c) 2008-2010 Apple Inc. All rights reserved.
*
* @APPLE_APACHE_LICENSE_HEADER_START@
- *
+ *
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
- *
+ *
* http://www.apache.org/licenses/LICENSE-2.0
- *
+ *
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
- *
+ *
* @APPLE_APACHE_LICENSE_HEADER_END@
*/
#include "internal.h"
void
-dispatch_debug(dispatch_object_t obj, const char *msg, ...)
-{
- va_list ap;
-
- va_start(ap, msg);
-
- dispatch_debugv(obj, msg, ap);
-
- va_end(ap);
-}
-
-void
-dispatch_debugv(dispatch_object_t dou, const char *msg, va_list ap)
-{
- char buf[4096];
- size_t offs;
-
- struct dispatch_object_s *obj = DO_CAST(dou);
-
- if (obj && obj->do_vtable->do_debug) {
- offs = dx_debug(obj, buf, sizeof(buf));
- } else {
- offs = snprintf(buf, sizeof(buf), "NULL vtable slot");
- }
-
- snprintf(buf + offs, sizeof(buf) - offs, ": %s", msg);
-
- _dispatch_logv(buf, ap);
-}
-
-void
dispatch_retain(dispatch_object_t dou)
{
- struct dispatch_object_s *obj = DO_CAST(dou);
-
- if (obj->do_xref_cnt == DISPATCH_OBJECT_GLOBAL_REFCNT) {
+ if (slowpath(dou._do->do_xref_cnt == DISPATCH_OBJECT_GLOBAL_REFCNT)) {
return; // global object
}
- if ((dispatch_atomic_inc(&obj->do_xref_cnt) - 1) == 0) {
+ if (slowpath((dispatch_atomic_inc2o(dou._do, do_xref_cnt) - 1) == 0)) {
DISPATCH_CLIENT_CRASH("Resurrection of an object");
}
}
@@ -67,12 +34,10 @@
void
_dispatch_retain(dispatch_object_t dou)
{
- struct dispatch_object_s *obj = DO_CAST(dou);
-
- if (obj->do_ref_cnt == DISPATCH_OBJECT_GLOBAL_REFCNT) {
+ if (slowpath(dou._do->do_ref_cnt == DISPATCH_OBJECT_GLOBAL_REFCNT)) {
return; // global object
}
- if ((dispatch_atomic_inc(&obj->do_ref_cnt) - 1) == 0) {
+ if (slowpath((dispatch_atomic_inc2o(dou._do, do_ref_cnt) - 1) == 0)) {
DISPATCH_CLIENT_CRASH("Resurrection of an object");
}
}
@@ -80,28 +45,23 @@
void
dispatch_release(dispatch_object_t dou)
{
- struct dispatch_object_s *obj = DO_CAST(dou);
-
- unsigned int oldval;
-
- if (obj->do_xref_cnt == DISPATCH_OBJECT_GLOBAL_REFCNT) {
+ if (slowpath(dou._do->do_xref_cnt == DISPATCH_OBJECT_GLOBAL_REFCNT)) {
return;
}
- oldval = dispatch_atomic_dec(&obj->do_xref_cnt) + 1;
-
- if (fastpath(oldval > 1)) {
+ unsigned int xref_cnt = dispatch_atomic_dec2o(dou._do, do_xref_cnt) + 1;
+ if (fastpath(xref_cnt > 1)) {
return;
}
- if (oldval == 1) {
- if ((uintptr_t)obj->do_vtable == (uintptr_t)&_dispatch_source_kevent_vtable) {
- return _dispatch_source_xref_release((dispatch_source_t)obj);
+ if (fastpath(xref_cnt == 1)) {
+ if (dou._do->do_vtable == (void*)&_dispatch_source_kevent_vtable) {
+ return _dispatch_source_xref_release(dou._ds);
}
- if (slowpath(DISPATCH_OBJECT_SUSPENDED(obj))) {
+ if (slowpath(DISPATCH_OBJECT_SUSPENDED(dou._do))) {
// Arguments for and against this assert are within 6705399
DISPATCH_CLIENT_CRASH("Release of a suspended object");
}
- return _dispatch_release(obj);
+ return _dispatch_release(dou._do);
}
DISPATCH_CLIENT_CRASH("Over-release of an object");
}
@@ -109,15 +69,13 @@
void
_dispatch_dispose(dispatch_object_t dou)
{
- struct dispatch_object_s *obj = DO_CAST(dou);
+ dispatch_queue_t tq = dou._do->do_targetq;
+ dispatch_function_t func = dou._do->do_finalizer;
+ void *ctxt = dou._do->do_ctxt;
- dispatch_queue_t tq = obj->do_targetq;
- dispatch_function_t func = obj->do_finalizer;
- void *ctxt = obj->do_ctxt;
+ dou._do->do_vtable = (void *)0x200;
- obj->do_vtable = (struct dispatch_object_vtable_s *)0x200;
-
- free(obj);
+ free(dou._do);
if (func && ctxt) {
dispatch_async_f(tq, ctxt, func);
@@ -128,28 +86,22 @@
void
_dispatch_release(dispatch_object_t dou)
{
- struct dispatch_object_s *obj = DO_CAST(dou);
-
- unsigned int oldval;
-
- if (obj->do_ref_cnt == DISPATCH_OBJECT_GLOBAL_REFCNT) {
+ if (slowpath(dou._do->do_ref_cnt == DISPATCH_OBJECT_GLOBAL_REFCNT)) {
return; // global object
}
- oldval = dispatch_atomic_dec(&obj->do_ref_cnt) + 1;
-
- if (fastpath(oldval > 1)) {
+ unsigned int ref_cnt = dispatch_atomic_dec2o(dou._do, do_ref_cnt) + 1;
+ if (fastpath(ref_cnt > 1)) {
return;
}
- if (oldval == 1) {
- if (obj->do_next != DISPATCH_OBJECT_LISTLESS) {
+ if (fastpath(ref_cnt == 1)) {
+ if (slowpath(dou._do->do_next != DISPATCH_OBJECT_LISTLESS)) {
DISPATCH_CRASH("release while enqueued");
}
- if (obj->do_xref_cnt) {
+ if (slowpath(dou._do->do_xref_cnt)) {
DISPATCH_CRASH("release while external references exist");
}
-
- return dx_dispose(obj);
+ return dx_dispose(dou._do);
}
DISPATCH_CRASH("over-release");
}
@@ -157,74 +109,78 @@
void *
dispatch_get_context(dispatch_object_t dou)
{
- struct dispatch_object_s *obj = DO_CAST(dou);
-
- return obj->do_ctxt;
+ return dou._do->do_ctxt;
}
void
dispatch_set_context(dispatch_object_t dou, void *context)
{
- struct dispatch_object_s *obj = DO_CAST(dou);
-
- if (obj->do_ref_cnt != DISPATCH_OBJECT_GLOBAL_REFCNT) {
- obj->do_ctxt = context;
+ if (dou._do->do_ref_cnt != DISPATCH_OBJECT_GLOBAL_REFCNT) {
+ dou._do->do_ctxt = context;
}
}
void
dispatch_set_finalizer_f(dispatch_object_t dou, dispatch_function_t finalizer)
{
- struct dispatch_object_s *obj = DO_CAST(dou);
-
- obj->do_finalizer = finalizer;
+ dou._do->do_finalizer = finalizer;
}
void
dispatch_suspend(dispatch_object_t dou)
{
- struct dispatch_object_s *obj = DO_CAST(dou);
-
- if (slowpath(obj->do_ref_cnt == DISPATCH_OBJECT_GLOBAL_REFCNT)) {
+ if (slowpath(dou._do->do_ref_cnt == DISPATCH_OBJECT_GLOBAL_REFCNT)) {
return;
}
- (void)dispatch_atomic_add(&obj->do_suspend_cnt, DISPATCH_OBJECT_SUSPEND_INTERVAL);
+ // rdar://8181908 explains why we need to do an internal retain at every
+ // suspension.
+ (void)dispatch_atomic_add2o(dou._do, do_suspend_cnt,
+ DISPATCH_OBJECT_SUSPEND_INTERVAL);
+ _dispatch_retain(dou._do);
+}
+
+DISPATCH_NOINLINE
+static void
+_dispatch_resume_slow(dispatch_object_t dou)
+{
+ _dispatch_wakeup(dou._do);
+ // Balancing the retain() done in suspend() for rdar://8181908
+ _dispatch_release(dou._do);
}
void
dispatch_resume(dispatch_object_t dou)
{
- struct dispatch_object_s *obj = DO_CAST(dou);
-
// Global objects cannot be suspended or resumed. This also has the
// side effect of saturating the suspend count of an object and
// guarding against resuming due to overflow.
- if (slowpath(obj->do_ref_cnt == DISPATCH_OBJECT_GLOBAL_REFCNT)) {
+ if (slowpath(dou._do->do_ref_cnt == DISPATCH_OBJECT_GLOBAL_REFCNT)) {
return;
}
-
- // Switch on the previous value of the suspend count. If the previous
+ // Check the previous value of the suspend count. If the previous
// value was a single suspend interval, the object should be resumed.
// If the previous value was less than the suspend interval, the object
// has been over-resumed.
- switch (dispatch_atomic_sub(&obj->do_suspend_cnt, DISPATCH_OBJECT_SUSPEND_INTERVAL) + DISPATCH_OBJECT_SUSPEND_INTERVAL) {
- case DISPATCH_OBJECT_SUSPEND_INTERVAL:
- _dispatch_wakeup(obj);
- break;
- case DISPATCH_OBJECT_SUSPEND_LOCK:
- case 0:
- DISPATCH_CLIENT_CRASH("Over-resume of an object");
- break;
- default:
- break;
+ unsigned int suspend_cnt = dispatch_atomic_sub2o(dou._do, do_suspend_cnt,
+ DISPATCH_OBJECT_SUSPEND_INTERVAL) +
+ DISPATCH_OBJECT_SUSPEND_INTERVAL;
+ if (fastpath(suspend_cnt > DISPATCH_OBJECT_SUSPEND_INTERVAL)) {
+ // Balancing the retain() done in suspend() for rdar://8181908
+ return _dispatch_release(dou._do);
}
+ if (fastpath(suspend_cnt == DISPATCH_OBJECT_SUSPEND_INTERVAL)) {
+ return _dispatch_resume_slow(dou);
+ }
+ DISPATCH_CLIENT_CRASH("Over-resume of an object");
}
size_t
-dispatch_object_debug_attr(dispatch_object_t dou, char* buf, size_t bufsiz)
+_dispatch_object_debug_attr(dispatch_object_t dou, char* buf, size_t bufsiz)
{
- struct dispatch_object_s *obj = DO_CAST(dou);
-
- return snprintf(buf, bufsiz, "refcnt = 0x%x, suspend_cnt = 0x%x, ",
- obj->do_ref_cnt, obj->do_suspend_cnt);
+ return snprintf(buf, bufsiz, "xrefcnt = 0x%x, refcnt = 0x%x, "
+ "suspend_cnt = 0x%x, locked = %d, ", dou._do->do_xref_cnt,
+ dou._do->do_ref_cnt,
+ dou._do->do_suspend_cnt / DISPATCH_OBJECT_SUSPEND_INTERVAL,
+ dou._do->do_suspend_cnt & 1);
}
+
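
[editor's note: illustrative sketch, not part of the patch]
_dispatch_dispose() above snapshots do_targetq, do_finalizer and do_ctxt before freeing the object, then submits the finalizer with the context to the target queue. A minimal sketch of the public pattern that relies on this (the strdup'd string is illustrative):

    #include <dispatch/dispatch.h>
    #include <stdlib.h>
    #include <string.h>

    int main(void)
    {
        dispatch_queue_t q = dispatch_queue_create("com.example.finalizer", NULL);

        // The finalizer runs asynchronously on the queue's target queue once
        // the object has been disposed, receiving the context as its argument.
        dispatch_set_context(q, strdup("per-queue state"));
        dispatch_set_finalizer_f(q, free);

        dispatch_release(q);
        dispatch_main();   // keep the process alive so the finalizer can run
    }
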
diff --git a/src/object_internal.h b/src/object_internal.h
index 31b6caf..0627cfd 100644
--- a/src/object_internal.h
+++ b/src/object_internal.h
@@ -1,20 +1,20 @@
/*
- * Copyright (c) 2008-2009 Apple Inc. All rights reserved.
+ * Copyright (c) 2008-2010 Apple Inc. All rights reserved.
*
* @APPLE_APACHE_LICENSE_HEADER_START@
- *
+ *
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
- *
+ *
* http://www.apache.org/licenses/LICENSE-2.0
- *
+ *
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
- *
+ *
* @APPLE_APACHE_LICENSE_HEADER_END@
*/
@@ -28,32 +28,42 @@
#define __DISPATCH_OBJECT_INTERNAL__
enum {
- _DISPATCH_CONTINUATION_TYPE = 0x00000, // meta-type for continuations
+ _DISPATCH_CONTINUATION_TYPE = 0x00000, // meta-type for continuations
_DISPATCH_QUEUE_TYPE = 0x10000, // meta-type for queues
_DISPATCH_SOURCE_TYPE = 0x20000, // meta-type for sources
_DISPATCH_SEMAPHORE_TYPE = 0x30000, // meta-type for semaphores
- _DISPATCH_ATTR_TYPE = 0x10000000, // meta-type for attribute structures
-
+ _DISPATCH_NODE_TYPE = 0x40000, // meta-type for data node
+ _DISPATCH_IO_TYPE = 0x50000, // meta-type for io channels
+ _DISPATCH_OPERATION_TYPE = 0x60000, // meta-type for io operations
+ _DISPATCH_DISK_TYPE = 0x70000, // meta-type for io disks
+ _DISPATCH_META_TYPE_MASK = 0xfff0000, // mask for object meta-types
+ _DISPATCH_ATTR_TYPE = 0x10000000, // meta-type for attributes
+
DISPATCH_CONTINUATION_TYPE = _DISPATCH_CONTINUATION_TYPE,
-
- DISPATCH_QUEUE_ATTR_TYPE = _DISPATCH_QUEUE_TYPE | _DISPATCH_ATTR_TYPE,
+
+ DISPATCH_DATA_TYPE = _DISPATCH_NODE_TYPE,
+
+ DISPATCH_IO_TYPE = _DISPATCH_IO_TYPE,
+ DISPATCH_OPERATION_TYPE = _DISPATCH_OPERATION_TYPE,
+ DISPATCH_DISK_TYPE = _DISPATCH_DISK_TYPE,
+
+ DISPATCH_QUEUE_ATTR_TYPE = _DISPATCH_QUEUE_TYPE |_DISPATCH_ATTR_TYPE,
DISPATCH_QUEUE_TYPE = 1 | _DISPATCH_QUEUE_TYPE,
DISPATCH_QUEUE_GLOBAL_TYPE = 2 | _DISPATCH_QUEUE_TYPE,
DISPATCH_QUEUE_MGR_TYPE = 3 | _DISPATCH_QUEUE_TYPE,
+ DISPATCH_QUEUE_SPECIFIC_TYPE = 4 | _DISPATCH_QUEUE_TYPE,
DISPATCH_SEMAPHORE_TYPE = _DISPATCH_SEMAPHORE_TYPE,
-
- DISPATCH_SOURCE_ATTR_TYPE = _DISPATCH_SOURCE_TYPE | _DISPATCH_ATTR_TYPE,
-
+
DISPATCH_SOURCE_KEVENT_TYPE = 1 | _DISPATCH_SOURCE_TYPE,
};
-#define DISPATCH_VTABLE_HEADER(x) \
- unsigned long const do_type; \
+#define DISPATCH_VTABLE_HEADER(x) \
+ unsigned long const do_type; \
const char *const do_kind; \
- size_t (*const do_debug)(struct x *, char *, size_t); \
- struct dispatch_queue_s *(*const do_invoke)(struct x *); \
+ size_t (*const do_debug)(struct x *, char *, size_t); \
+ struct dispatch_queue_s *(*const do_invoke)(struct x *); \
bool (*const do_probe)(struct x *); \
void (*const do_dispose)(struct x *)
@@ -64,34 +74,31 @@
#define dx_invoke(x) (x)->do_vtable->do_invoke(x)
#define dx_probe(x) (x)->do_vtable->do_probe(x)
-#define DISPATCH_STRUCT_HEADER(x, y) \
- const struct y *do_vtable; \
- struct x *volatile do_next; \
- unsigned int do_ref_cnt; \
- unsigned int do_xref_cnt; \
- unsigned int do_suspend_cnt; \
- struct dispatch_queue_s *do_targetq; \
+#define DISPATCH_STRUCT_HEADER(x, y) \
+ const struct y *do_vtable; \
+ struct x *volatile do_next; \
+ unsigned int do_ref_cnt; \
+ unsigned int do_xref_cnt; \
+ unsigned int do_suspend_cnt; \
+ struct dispatch_queue_s *do_targetq; \
void *do_ctxt; \
- dispatch_function_t do_finalizer
+ void *do_finalizer;
-#define DISPATCH_OBJECT_GLOBAL_REFCNT (~0u)
-#define DISPATCH_OBJECT_SUSPEND_LOCK 1u // "word and bit" must be a power of two to be safely subtracted
+#define DISPATCH_OBJECT_GLOBAL_REFCNT (~0u)
+// "word and bit" must be a power of two to be safely subtracted
+#define DISPATCH_OBJECT_SUSPEND_LOCK 1u
#define DISPATCH_OBJECT_SUSPEND_INTERVAL 2u
-#define DISPATCH_OBJECT_SUSPENDED(x) ((x)->do_suspend_cnt >= DISPATCH_OBJECT_SUSPEND_INTERVAL)
+#define DISPATCH_OBJECT_SUSPENDED(x) \
+ ((x)->do_suspend_cnt >= DISPATCH_OBJECT_SUSPEND_INTERVAL)
#ifdef __LP64__
// the bottom nibble must not be zero, the rest of the bits should be random
-// we sign extend the 64-bit version so that a better instruction encoding is generated on Intel
-#define DISPATCH_OBJECT_LISTLESS ((void *)0xffffffff89abcdef)
+// we sign extend the 64-bit version so that a better instruction encoding is
+// generated on Intel
+#define DISPATCH_OBJECT_LISTLESS ((void *)0xffffffff89abcdef)
#else
-#define DISPATCH_OBJECT_LISTLESS ((void *)0x89abcdef)
+#define DISPATCH_OBJECT_LISTLESS ((void *)0x89abcdef)
#endif
-#define _dispatch_trysuspend(x) __sync_bool_compare_and_swap(&(x)->do_suspend_cnt, 0, DISPATCH_OBJECT_SUSPEND_INTERVAL)
-// _dispatch_source_invoke() relies on this testing the whole suspend count
-// word, not just the lock bit. In other words, no point taking the lock
-// if the source is suspended or canceled.
-#define _dispatch_trylock(x) dispatch_atomic_cmpxchg(&(x)->do_suspend_cnt, 0, DISPATCH_OBJECT_SUSPEND_LOCK)
-
struct dispatch_object_vtable_s {
DISPATCH_VTABLE_HEADER(dispatch_object_s);
};
@@ -100,7 +107,8 @@
DISPATCH_STRUCT_HEADER(dispatch_object_s, dispatch_object_vtable_s);
};
-size_t dispatch_object_debug_attr(dispatch_object_t dou, char* buf, size_t bufsiz);
+size_t _dispatch_object_debug_attr(dispatch_object_t dou, char* buf,
+ size_t bufsiz);
void _dispatch_retain(dispatch_object_t dou);
void _dispatch_release(dispatch_object_t dou);
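
[editor's note: illustrative sketch, not part of the patch]
The reworked type enum stores a meta-type in the upper bits (masked by the new _DISPATCH_META_TYPE_MASK) and a subtype in the low bits, so code can classify an object without enumerating every concrete type. A sketch of that test; it assumes the usual dx_type() accessor, i.e. (x)->do_vtable->do_type, defined alongside dx_invoke()/dx_probe() in this header, and is only meaningful inside the internal headers where dispatch_object_t is the transparent union used above:

    // Sketch: classify a dispatch object by meta-type rather than exact type.
    // (dx_type() is assumed to expand to (x)->do_vtable->do_type.)
    static inline bool
    _dispatch_object_is_a_queue(dispatch_object_t dou)
    {
        // e.g. DISPATCH_QUEUE_GLOBAL_TYPE == (2 | 0x10000); masking with
        // _DISPATCH_META_TYPE_MASK (0xfff0000) recovers _DISPATCH_QUEUE_TYPE.
        return (dx_type(dou._do) & _DISPATCH_META_TYPE_MASK) ==
                _DISPATCH_QUEUE_TYPE;
    }
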
diff --git a/src/once.c b/src/once.c
index 63a352e..ab4a4e8 100644
--- a/src/once.c
+++ b/src/once.c
@@ -1,20 +1,20 @@
/*
- * Copyright (c) 2008-2009 Apple Inc. All rights reserved.
+ * Copyright (c) 2008-2011 Apple Inc. All rights reserved.
*
* @APPLE_APACHE_LICENSE_HEADER_START@
- *
+ *
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
- *
+ *
* http://www.apache.org/licenses/LICENSE-2.0
- *
+ *
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
- *
+ *
* @APPLE_APACHE_LICENSE_HEADER_END@
*/
@@ -23,24 +23,37 @@
#undef dispatch_once
#undef dispatch_once_f
+
+struct _dispatch_once_waiter_s {
+ volatile struct _dispatch_once_waiter_s *volatile dow_next;
+ _dispatch_thread_semaphore_t dow_sema;
+};
+
+#define DISPATCH_ONCE_DONE ((struct _dispatch_once_waiter_s *)~0l)
+
#ifdef __BLOCKS__
void
-dispatch_once(dispatch_once_t *val, void (^block)(void))
+dispatch_once(dispatch_once_t *val, dispatch_block_t block)
{
- struct Block_basic *bb = (struct Block_basic *)(void *)block;
+ struct Block_basic *bb = (void *)block;
- dispatch_once_f(val, block, (dispatch_function_t)bb->Block_invoke);
+ dispatch_once_f(val, block, (void *)bb->Block_invoke);
}
#endif
DISPATCH_NOINLINE
void
-dispatch_once_f(dispatch_once_t *val, void *ctxt, void (*func)(void *))
+dispatch_once_f(dispatch_once_t *val, void *ctxt, dispatch_function_t func)
{
- volatile long *vval = val;
+ struct _dispatch_once_waiter_s * volatile *vval =
+ (struct _dispatch_once_waiter_s**)val;
+ struct _dispatch_once_waiter_s dow = { NULL, 0 };
+ struct _dispatch_once_waiter_s *tail, *tmp;
+ _dispatch_thread_semaphore_t sema;
- if (dispatch_atomic_cmpxchg(val, 0l, 1l)) {
- func(ctxt);
+ if (dispatch_atomic_cmpxchg(vval, NULL, &dow)) {
+ dispatch_atomic_acquire_barrier();
+ _dispatch_client_callout(ctxt, func);
// The next barrier must be long and strong.
//
@@ -52,25 +65,25 @@
// The dispatch_once*() wrapper macro causes the callee's
// instruction stream to look like this (pseudo-RISC):
//
- // load r5, pred-addr
- // cmpi r5, -1
- // beq 1f
- // call dispatch_once*()
- // 1f:
- // load r6, data-addr
+ // load r5, pred-addr
+ // cmpi r5, -1
+ // beq 1f
+ // call dispatch_once*()
+ // 1f:
+ // load r6, data-addr
//
// May be re-ordered like so:
//
- // load r6, data-addr
- // load r5, pred-addr
- // cmpi r5, -1
- // beq 1f
- // call dispatch_once*()
- // 1f:
+ // load r6, data-addr
+ // load r5, pred-addr
+ // cmpi r5, -1
+ // beq 1f
+ // call dispatch_once*()
+ // 1f:
//
// Normally, a barrier on the read side is used to workaround
// the weakly ordered memory model. But barriers are expensive
- // and we only need to synchronize once! After func(ctxt)
+ // and we only need to synchronize once! After func(ctxt)
// completes, the predicate will be marked as "done" and the
// branch predictor will correctly skip the call to
// dispatch_once*().
@@ -91,14 +104,32 @@
//
// On some CPUs, the most fully synchronizing instruction might
// need to be issued.
-
- dispatch_atomic_barrier();
- *val = ~0l;
- } else {
- do {
- _dispatch_hardware_pause();
- } while (*vval != ~0l);
- dispatch_atomic_barrier();
+ dispatch_atomic_maximally_synchronizing_barrier();
+ //dispatch_atomic_release_barrier(); // assumed contained in above
+ tmp = dispatch_atomic_xchg(vval, DISPATCH_ONCE_DONE);
+ tail = &dow;
+ while (tail != tmp) {
+ while (!tmp->dow_next) {
+ _dispatch_hardware_pause();
+ }
+ sema = tmp->dow_sema;
+ tmp = (struct _dispatch_once_waiter_s*)tmp->dow_next;
+ _dispatch_thread_semaphore_signal(sema);
+ }
+ } else {
+ dow.dow_sema = _dispatch_get_thread_semaphore();
+ for (;;) {
+ tmp = *vval;
+ if (tmp == DISPATCH_ONCE_DONE) {
+ break;
+ }
+ dispatch_atomic_store_barrier();
+ if (dispatch_atomic_cmpxchg(vval, tmp, &dow)) {
+ dow.dow_next = tmp;
+ _dispatch_thread_semaphore_wait(dow.dow_sema);
+ }
+ }
+ _dispatch_put_thread_semaphore(dow.dow_sema);
}
}
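
[editor's note: illustrative sketch, not part of the patch]
The rewritten contended path above parks late arrivals on cached per-thread semaphores chained through a waiter list, instead of spinning; the winner still pays the single "maximally synchronizing" barrier before publishing DISPATCH_ONCE_DONE and signalling each waiter in turn. The caller-side contract is unchanged; a minimal usage sketch:

    #include <dispatch/dispatch.h>

    static dispatch_once_t init_pred;
    static int shared_value;

    static void
    init_shared(void *ctxt)
    {
        // Runs exactly once; threads that race on init_pred block on a
        // thread semaphore until this returns, then observe the value.
        *(int *)ctxt = 42;
    }

    int
    get_shared_value(void)
    {
        dispatch_once_f(&init_pred, &shared_value, init_shared);
        return shared_value;
    }
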
diff --git a/src/protocol.defs b/src/protocol.defs
index e6bd400..bf5fe5b 100644
--- a/src/protocol.defs
+++ b/src/protocol.defs
@@ -1,45 +1,48 @@
/*
- * Copyright (c) 2009 Apple Inc. All rights reserved.
+ * Copyright (c) 2008-2011 Apple Inc. All rights reserved.
*
* @APPLE_APACHE_LICENSE_HEADER_START@
- *
+ *
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
- *
+ *
* http://www.apache.org/licenses/LICENSE-2.0
- *
+ *
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
- *
+ *
* @APPLE_APACHE_LICENSE_HEADER_END@
*/
-/*
- * Copyright (c) 2008-2009 Apple Inc. All rights reserved.
- */
#include <mach/std_types.defs>
#include <mach/mach_types.defs>
-// '64' is used to align with Mach notifications and so that we don't fight with the notify symbols in Libsystem
+// '64' is used to align with Mach notifications and so that we don't fight
+// with the notify symbols in Libsystem
subsystem libdispatch_internal_protocol 64;
serverprefix _dispatch_;
userprefix _dispatch_send_;
-skip; /* was MACH_NOTIFY_FIRST: 64 */
+skip; /* was MACH_NOTIFY_FIRST: 64 */
/* MACH_NOTIFY_PORT_DELETED: 65 */
simpleroutine
mach_notify_port_deleted(
- _notify : mach_port_move_send_once_t;
- _name : mach_port_name_t
+ _notify : mach_port_move_send_once_t;
+ _name : mach_port_name_t
);
-skip; /* was MACH_NOTIFY_MSG_ACCEPTED: 66 */
+/* MACH_NOTIFY_SEND_POSSIBLE: 66 */
+simpleroutine
+mach_notify_send_possible(
+ _notify : mach_port_move_send_once_t;
+ _name : mach_port_name_t
+);
skip; /* was NOTIFY_OWNERSHIP_RIGHTS: 67 */
@@ -48,28 +51,28 @@
/* MACH_NOTIFY_PORT_DESTROYED: 69 */
simpleroutine
mach_notify_port_destroyed(
- _notify : mach_port_move_send_once_t;
- _rights : mach_port_move_receive_t
+ _notify : mach_port_move_send_once_t;
+ _rights : mach_port_move_receive_t
);
/* MACH_NOTIFY_NO_SENDERS: 70 */
simpleroutine
mach_notify_no_senders(
- _notify : mach_port_move_send_once_t;
- _mscnt : mach_port_mscount_t
+ _notify : mach_port_move_send_once_t;
+ _mscnt : mach_port_mscount_t
);
/* MACH_NOTIFY_SEND_ONCE: 71 */
simpleroutine
mach_notify_send_once(
- _notify : mach_port_move_send_once_t
+ _notify : mach_port_move_send_once_t
);
/* MACH_NOTIFY_DEAD_NAME: 72 */
simpleroutine
mach_notify_dead_name(
- _notify : mach_port_move_send_once_t;
- _name : mach_port_name_t
+ _notify : mach_port_move_send_once_t;
+ _name : mach_port_name_t
);
/* highly unlikely additional Mach notifications */
@@ -81,11 +84,11 @@
simpleroutine
wakeup_main_thread(
- _port : mach_port_t;
- WaitTime _waitTimeout : natural_t
+ _port : mach_port_t;
+ WaitTime _waitTimeout : natural_t
);
simpleroutine
consume_send_once_right(
- _port : mach_port_move_send_once_t
+ _port : mach_port_move_send_once_t
);
diff --git a/src/provider.d b/src/provider.d
new file mode 100644
index 0000000..59fe790
--- /dev/null
+++ b/src/provider.d
@@ -0,0 +1,42 @@
+/*
+ * Copyright (c) 2010 Apple Inc. All rights reserved.
+ *
+ * @APPLE_APACHE_LICENSE_HEADER_START@
+ *
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ *
+ * @APPLE_APACHE_LICENSE_HEADER_END@
+ */
+
+typedef struct dispatch_object_s *dispatch_object_t;
+typedef struct dispatch_queue_s *dispatch_queue_t;
+typedef void (*dispatch_function_t)(void *);
+
+provider dispatch {
+ probe queue__push(dispatch_queue_t queue, const char *label,
+ dispatch_object_t item, const char *kind,
+ dispatch_function_t function, void *context);
+ probe queue__pop(dispatch_queue_t queue, const char *label,
+ dispatch_object_t item, const char *kind,
+ dispatch_function_t function, void *context);
+ probe callout__entry(dispatch_queue_t queue, const char *label,
+ dispatch_function_t function, void *context);
+ probe callout__return(dispatch_queue_t queue, const char *label,
+ dispatch_function_t function, void *context);
+};
+
+#pragma D attributes Evolving/Evolving/Common provider dispatch provider
+#pragma D attributes Private/Private/Common provider dispatch module
+#pragma D attributes Private/Private/Common provider dispatch function
+#pragma D attributes Evolving/Evolving/Common provider dispatch name
+#pragma D attributes Evolving/Evolving/Common provider dispatch args
diff --git a/src/queue.c b/src/queue.c
index 39f3794..595bac5 100644
--- a/src/queue.c
+++ b/src/queue.c
@@ -1,20 +1,20 @@
/*
- * Copyright (c) 2008-2009 Apple Inc. All rights reserved.
+ * Copyright (c) 2008-2011 Apple Inc. All rights reserved.
*
* @APPLE_APACHE_LICENSE_HEADER_START@
- *
+ *
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
- *
+ *
* http://www.apache.org/licenses/LICENSE-2.0
- *
+ *
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
- *
+ *
* @APPLE_APACHE_LICENSE_HEADER_END@
*/
@@ -23,155 +23,45 @@
#include "protocol.h"
#endif
-static void _dispatch_queue_cleanup2(void);
-
-void
-dummy_function(void)
-{
-}
-
-long
-dummy_function_r0(void)
-{
- return 0;
-}
-
-
-static struct dispatch_semaphore_s _dispatch_thread_mediator[] = {
- {
- .do_vtable = &_dispatch_semaphore_vtable,
- .do_ref_cnt = DISPATCH_OBJECT_GLOBAL_REFCNT,
- .do_xref_cnt = DISPATCH_OBJECT_GLOBAL_REFCNT,
- },
- {
- .do_vtable = &_dispatch_semaphore_vtable,
- .do_ref_cnt = DISPATCH_OBJECT_GLOBAL_REFCNT,
- .do_xref_cnt = DISPATCH_OBJECT_GLOBAL_REFCNT,
- },
- {
- .do_vtable = &_dispatch_semaphore_vtable,
- .do_ref_cnt = DISPATCH_OBJECT_GLOBAL_REFCNT,
- .do_xref_cnt = DISPATCH_OBJECT_GLOBAL_REFCNT,
- },
- {
- .do_vtable = &_dispatch_semaphore_vtable,
- .do_ref_cnt = DISPATCH_OBJECT_GLOBAL_REFCNT,
- .do_xref_cnt = DISPATCH_OBJECT_GLOBAL_REFCNT,
- },
- {
- .do_vtable = &_dispatch_semaphore_vtable,
- .do_ref_cnt = DISPATCH_OBJECT_GLOBAL_REFCNT,
- .do_xref_cnt = DISPATCH_OBJECT_GLOBAL_REFCNT,
- },
- {
- .do_vtable = &_dispatch_semaphore_vtable,
- .do_ref_cnt = DISPATCH_OBJECT_GLOBAL_REFCNT,
- .do_xref_cnt = DISPATCH_OBJECT_GLOBAL_REFCNT,
- },
-};
-
-static inline dispatch_queue_t
-_dispatch_get_root_queue(long priority, bool overcommit)
-{
- if (overcommit) switch (priority) {
- case DISPATCH_QUEUE_PRIORITY_LOW:
- return &_dispatch_root_queues[1];
- case DISPATCH_QUEUE_PRIORITY_DEFAULT:
- return &_dispatch_root_queues[3];
- case DISPATCH_QUEUE_PRIORITY_HIGH:
- return &_dispatch_root_queues[5];
- }
- switch (priority) {
- case DISPATCH_QUEUE_PRIORITY_LOW:
- return &_dispatch_root_queues[0];
- case DISPATCH_QUEUE_PRIORITY_DEFAULT:
- return &_dispatch_root_queues[2];
- case DISPATCH_QUEUE_PRIORITY_HIGH:
- return &_dispatch_root_queues[4];
- default:
- return NULL;
- }
-}
-
-#ifdef __BLOCKS__
-dispatch_block_t
-_dispatch_Block_copy(dispatch_block_t db)
-{
- dispatch_block_t rval;
-
- while (!(rval = Block_copy(db))) {
- sleep(1);
- }
-
- return rval;
-}
-#define _dispatch_Block_copy(x) ((typeof(x))_dispatch_Block_copy(x))
-
-void
-_dispatch_call_block_and_release(void *block)
-{
- void (^b)(void) = block;
- b();
- Block_release(b);
-}
-
-void
-_dispatch_call_block_and_release2(void *block, void *ctxt)
-{
- void (^b)(void*) = block;
- b(ctxt);
- Block_release(b);
-}
-
-#endif /* __BLOCKS__ */
-
-struct dispatch_queue_attr_vtable_s {
- DISPATCH_VTABLE_HEADER(dispatch_queue_attr_s);
-};
-
-struct dispatch_queue_attr_s {
- DISPATCH_STRUCT_HEADER(dispatch_queue_attr_s, dispatch_queue_attr_vtable_s);
-
-#ifndef DISPATCH_NO_LEGACY
- // Public:
- int qa_priority;
- void* finalizer_ctxt;
- dispatch_queue_finalizer_function_t finalizer_func;
-
- // Private:
- unsigned long qa_flags;
+#if (!HAVE_PTHREAD_WORKQUEUES || DISPATCH_DEBUG) && \
+ !defined(DISPATCH_ENABLE_THREAD_POOL)
+#define DISPATCH_ENABLE_THREAD_POOL 1
#endif
-};
-static int _dispatch_pthread_sigmask(int how, sigset_t *set, sigset_t *oset);
-
-#define _dispatch_queue_trylock(dq) dispatch_atomic_cmpxchg(&(dq)->dq_running, 0, 1)
-static inline void _dispatch_queue_unlock(dispatch_queue_t dq);
-static void _dispatch_queue_invoke(dispatch_queue_t dq);
+static void _dispatch_cache_cleanup(void *value);
+static void _dispatch_async_f_redirect(dispatch_queue_t dq,
+ dispatch_continuation_t dc);
+static void _dispatch_queue_cleanup(void *ctxt);
static bool _dispatch_queue_wakeup_global(dispatch_queue_t dq);
-static struct dispatch_object_s *_dispatch_queue_concurrent_drain_one(dispatch_queue_t dq);
-
-static bool _dispatch_program_is_probably_callback_driven;
+static void _dispatch_queue_drain(dispatch_queue_t dq);
+static inline _dispatch_thread_semaphore_t
+ _dispatch_queue_drain_one_barrier_sync(dispatch_queue_t dq);
+static void _dispatch_worker_thread2(void *context);
+#if DISPATCH_ENABLE_THREAD_POOL
+static void *_dispatch_worker_thread(void *context);
+static int _dispatch_pthread_sigmask(int how, sigset_t *set, sigset_t *oset);
+#endif
+static bool _dispatch_mgr_wakeup(dispatch_queue_t dq);
+static dispatch_queue_t _dispatch_mgr_thread(dispatch_queue_t dq);
#if DISPATCH_COCOA_COMPAT
-void (*dispatch_begin_thread_4GC)(void) = dummy_function;
-void (*dispatch_end_thread_4GC)(void) = dummy_function;
-void *(*_dispatch_begin_NSAutoReleasePool)(void) = (void *)dummy_function;
-void (*_dispatch_end_NSAutoReleasePool)(void *) = (void *)dummy_function;
-static void _dispatch_queue_wakeup_main(void);
-
+static unsigned int _dispatch_worker_threads;
static dispatch_once_t _dispatch_main_q_port_pred;
-static bool main_q_is_draining;
static mach_port_t main_q_port;
+
+static void _dispatch_main_q_port_init(void *ctxt);
+static void _dispatch_queue_wakeup_main(void);
+static void _dispatch_main_queue_drain(void);
#endif
-static void _dispatch_cache_cleanup2(void *value);
+#pragma mark -
+#pragma mark dispatch_queue_vtable
-static const struct dispatch_queue_vtable_s _dispatch_queue_vtable = {
+const struct dispatch_queue_vtable_s _dispatch_queue_vtable = {
.do_type = DISPATCH_QUEUE_TYPE,
.do_kind = "queue",
.do_dispose = _dispatch_queue_dispose,
- .do_invoke = (void *)dummy_function_r0,
+ .do_invoke = NULL,
.do_probe = (void *)dummy_function_r0,
.do_debug = dispatch_queue_debug,
};
@@ -183,6 +73,77 @@
.do_probe = _dispatch_queue_wakeup_global,
};
+static const struct dispatch_queue_vtable_s _dispatch_queue_mgr_vtable = {
+ .do_type = DISPATCH_QUEUE_MGR_TYPE,
+ .do_kind = "mgr-queue",
+ .do_invoke = _dispatch_mgr_thread,
+ .do_debug = dispatch_queue_debug,
+ .do_probe = _dispatch_mgr_wakeup,
+};
+
+#pragma mark -
+#pragma mark dispatch_root_queue
+
+#if HAVE_PTHREAD_WORKQUEUES
+static const int _dispatch_root_queue_wq_priorities[] = {
+ [DISPATCH_ROOT_QUEUE_IDX_LOW_PRIORITY] = WORKQ_LOW_PRIOQUEUE,
+ [DISPATCH_ROOT_QUEUE_IDX_LOW_OVERCOMMIT_PRIORITY] = WORKQ_LOW_PRIOQUEUE,
+ [DISPATCH_ROOT_QUEUE_IDX_DEFAULT_PRIORITY] = WORKQ_DEFAULT_PRIOQUEUE,
+ [DISPATCH_ROOT_QUEUE_IDX_DEFAULT_OVERCOMMIT_PRIORITY] =
+ WORKQ_DEFAULT_PRIOQUEUE,
+ [DISPATCH_ROOT_QUEUE_IDX_HIGH_PRIORITY] = WORKQ_HIGH_PRIOQUEUE,
+ [DISPATCH_ROOT_QUEUE_IDX_HIGH_OVERCOMMIT_PRIORITY] = WORKQ_HIGH_PRIOQUEUE,
+ [DISPATCH_ROOT_QUEUE_IDX_BACKGROUND_PRIORITY] = WORKQ_BG_PRIOQUEUE,
+ [DISPATCH_ROOT_QUEUE_IDX_BACKGROUND_OVERCOMMIT_PRIORITY] =
+ WORKQ_BG_PRIOQUEUE,
+};
+#endif
+
+#if DISPATCH_ENABLE_THREAD_POOL
+static struct dispatch_semaphore_s _dispatch_thread_mediator[] = {
+ [DISPATCH_ROOT_QUEUE_IDX_LOW_PRIORITY] = {
+ .do_vtable = &_dispatch_semaphore_vtable,
+ .do_ref_cnt = DISPATCH_OBJECT_GLOBAL_REFCNT,
+ .do_xref_cnt = DISPATCH_OBJECT_GLOBAL_REFCNT,
+ },
+ [DISPATCH_ROOT_QUEUE_IDX_LOW_OVERCOMMIT_PRIORITY] = {
+ .do_vtable = &_dispatch_semaphore_vtable,
+ .do_ref_cnt = DISPATCH_OBJECT_GLOBAL_REFCNT,
+ .do_xref_cnt = DISPATCH_OBJECT_GLOBAL_REFCNT,
+ },
+ [DISPATCH_ROOT_QUEUE_IDX_DEFAULT_PRIORITY] = {
+ .do_vtable = &_dispatch_semaphore_vtable,
+ .do_ref_cnt = DISPATCH_OBJECT_GLOBAL_REFCNT,
+ .do_xref_cnt = DISPATCH_OBJECT_GLOBAL_REFCNT,
+ },
+ [DISPATCH_ROOT_QUEUE_IDX_DEFAULT_OVERCOMMIT_PRIORITY] = {
+ .do_vtable = &_dispatch_semaphore_vtable,
+ .do_ref_cnt = DISPATCH_OBJECT_GLOBAL_REFCNT,
+ .do_xref_cnt = DISPATCH_OBJECT_GLOBAL_REFCNT,
+ },
+ [DISPATCH_ROOT_QUEUE_IDX_HIGH_PRIORITY] = {
+ .do_vtable = &_dispatch_semaphore_vtable,
+ .do_ref_cnt = DISPATCH_OBJECT_GLOBAL_REFCNT,
+ .do_xref_cnt = DISPATCH_OBJECT_GLOBAL_REFCNT,
+ },
+ [DISPATCH_ROOT_QUEUE_IDX_HIGH_OVERCOMMIT_PRIORITY] = {
+ .do_vtable = &_dispatch_semaphore_vtable,
+ .do_ref_cnt = DISPATCH_OBJECT_GLOBAL_REFCNT,
+ .do_xref_cnt = DISPATCH_OBJECT_GLOBAL_REFCNT,
+ },
+ [DISPATCH_ROOT_QUEUE_IDX_BACKGROUND_PRIORITY] = {
+ .do_vtable = &_dispatch_semaphore_vtable,
+ .do_ref_cnt = DISPATCH_OBJECT_GLOBAL_REFCNT,
+ .do_xref_cnt = DISPATCH_OBJECT_GLOBAL_REFCNT,
+ },
+ [DISPATCH_ROOT_QUEUE_IDX_BACKGROUND_OVERCOMMIT_PRIORITY] = {
+ .do_vtable = &_dispatch_semaphore_vtable,
+ .do_ref_cnt = DISPATCH_OBJECT_GLOBAL_REFCNT,
+ .do_xref_cnt = DISPATCH_OBJECT_GLOBAL_REFCNT,
+ },
+};
+#endif
+
#define MAX_THREAD_COUNT 255
struct dispatch_root_queue_context_s {
@@ -190,237 +151,206 @@
pthread_workqueue_t dgq_kworkqueue;
#endif
uint32_t dgq_pending;
+#if DISPATCH_ENABLE_THREAD_POOL
uint32_t dgq_thread_pool_size;
dispatch_semaphore_t dgq_thread_mediator;
+#endif
};
static struct dispatch_root_queue_context_s _dispatch_root_queue_contexts[] = {
- {
- .dgq_thread_mediator = &_dispatch_thread_mediator[0],
+ [DISPATCH_ROOT_QUEUE_IDX_LOW_PRIORITY] = {
+#if DISPATCH_ENABLE_THREAD_POOL
+ .dgq_thread_mediator = &_dispatch_thread_mediator[
+ DISPATCH_ROOT_QUEUE_IDX_LOW_PRIORITY],
.dgq_thread_pool_size = MAX_THREAD_COUNT,
+#endif
},
- {
- .dgq_thread_mediator = &_dispatch_thread_mediator[1],
+ [DISPATCH_ROOT_QUEUE_IDX_LOW_OVERCOMMIT_PRIORITY] = {
+#if DISPATCH_ENABLE_THREAD_POOL
+ .dgq_thread_mediator = &_dispatch_thread_mediator[
+ DISPATCH_ROOT_QUEUE_IDX_LOW_OVERCOMMIT_PRIORITY],
.dgq_thread_pool_size = MAX_THREAD_COUNT,
+#endif
},
- {
- .dgq_thread_mediator = &_dispatch_thread_mediator[2],
+ [DISPATCH_ROOT_QUEUE_IDX_DEFAULT_PRIORITY] = {
+#if DISPATCH_ENABLE_THREAD_POOL
+ .dgq_thread_mediator = &_dispatch_thread_mediator[
+ DISPATCH_ROOT_QUEUE_IDX_DEFAULT_PRIORITY],
.dgq_thread_pool_size = MAX_THREAD_COUNT,
+#endif
},
- {
- .dgq_thread_mediator = &_dispatch_thread_mediator[3],
+ [DISPATCH_ROOT_QUEUE_IDX_DEFAULT_OVERCOMMIT_PRIORITY] = {
+#if DISPATCH_ENABLE_THREAD_POOL
+ .dgq_thread_mediator = &_dispatch_thread_mediator[
+ DISPATCH_ROOT_QUEUE_IDX_DEFAULT_OVERCOMMIT_PRIORITY],
.dgq_thread_pool_size = MAX_THREAD_COUNT,
+#endif
},
- {
- .dgq_thread_mediator = &_dispatch_thread_mediator[4],
+ [DISPATCH_ROOT_QUEUE_IDX_HIGH_PRIORITY] = {
+#if DISPATCH_ENABLE_THREAD_POOL
+ .dgq_thread_mediator = &_dispatch_thread_mediator[
+ DISPATCH_ROOT_QUEUE_IDX_HIGH_PRIORITY],
.dgq_thread_pool_size = MAX_THREAD_COUNT,
+#endif
},
- {
- .dgq_thread_mediator = &_dispatch_thread_mediator[5],
+ [DISPATCH_ROOT_QUEUE_IDX_HIGH_OVERCOMMIT_PRIORITY] = {
+#if DISPATCH_ENABLE_THREAD_POOL
+ .dgq_thread_mediator = &_dispatch_thread_mediator[
+ DISPATCH_ROOT_QUEUE_IDX_HIGH_OVERCOMMIT_PRIORITY],
.dgq_thread_pool_size = MAX_THREAD_COUNT,
+#endif
+ },
+ [DISPATCH_ROOT_QUEUE_IDX_BACKGROUND_PRIORITY] = {
+#if DISPATCH_ENABLE_THREAD_POOL
+ .dgq_thread_mediator = &_dispatch_thread_mediator[
+ DISPATCH_ROOT_QUEUE_IDX_BACKGROUND_PRIORITY],
+ .dgq_thread_pool_size = MAX_THREAD_COUNT,
+#endif
+ },
+ [DISPATCH_ROOT_QUEUE_IDX_BACKGROUND_OVERCOMMIT_PRIORITY] = {
+#if DISPATCH_ENABLE_THREAD_POOL
+ .dgq_thread_mediator = &_dispatch_thread_mediator[
+ DISPATCH_ROOT_QUEUE_IDX_BACKGROUND_OVERCOMMIT_PRIORITY],
+ .dgq_thread_pool_size = MAX_THREAD_COUNT,
+#endif
},
};
-// 6618342 Contact the team that owns the Instrument DTrace probe before renaming this symbol
+// 6618342 Contact the team that owns the Instrument DTrace probe before
+// renaming this symbol
// dq_running is set to 2 so that barrier operations go through the slow path
+DISPATCH_CACHELINE_ALIGN
struct dispatch_queue_s _dispatch_root_queues[] = {
- {
+ [DISPATCH_ROOT_QUEUE_IDX_LOW_PRIORITY] = {
.do_vtable = &_dispatch_queue_root_vtable,
.do_ref_cnt = DISPATCH_OBJECT_GLOBAL_REFCNT,
.do_xref_cnt = DISPATCH_OBJECT_GLOBAL_REFCNT,
.do_suspend_cnt = DISPATCH_OBJECT_SUSPEND_LOCK,
- .do_ctxt = &_dispatch_root_queue_contexts[0],
+ .do_ctxt = &_dispatch_root_queue_contexts[
+ DISPATCH_ROOT_QUEUE_IDX_LOW_PRIORITY],
.dq_label = "com.apple.root.low-priority",
.dq_running = 2,
.dq_width = UINT32_MAX,
.dq_serialnum = 4,
},
- {
+ [DISPATCH_ROOT_QUEUE_IDX_LOW_OVERCOMMIT_PRIORITY] = {
.do_vtable = &_dispatch_queue_root_vtable,
.do_ref_cnt = DISPATCH_OBJECT_GLOBAL_REFCNT,
.do_xref_cnt = DISPATCH_OBJECT_GLOBAL_REFCNT,
.do_suspend_cnt = DISPATCH_OBJECT_SUSPEND_LOCK,
- .do_ctxt = &_dispatch_root_queue_contexts[1],
+ .do_ctxt = &_dispatch_root_queue_contexts[
+ DISPATCH_ROOT_QUEUE_IDX_LOW_OVERCOMMIT_PRIORITY],
.dq_label = "com.apple.root.low-overcommit-priority",
.dq_running = 2,
.dq_width = UINT32_MAX,
.dq_serialnum = 5,
},
- {
+ [DISPATCH_ROOT_QUEUE_IDX_DEFAULT_PRIORITY] = {
.do_vtable = &_dispatch_queue_root_vtable,
.do_ref_cnt = DISPATCH_OBJECT_GLOBAL_REFCNT,
.do_xref_cnt = DISPATCH_OBJECT_GLOBAL_REFCNT,
.do_suspend_cnt = DISPATCH_OBJECT_SUSPEND_LOCK,
- .do_ctxt = &_dispatch_root_queue_contexts[2],
+ .do_ctxt = &_dispatch_root_queue_contexts[
+ DISPATCH_ROOT_QUEUE_IDX_DEFAULT_PRIORITY],
.dq_label = "com.apple.root.default-priority",
.dq_running = 2,
.dq_width = UINT32_MAX,
.dq_serialnum = 6,
},
- {
+ [DISPATCH_ROOT_QUEUE_IDX_DEFAULT_OVERCOMMIT_PRIORITY] = {
.do_vtable = &_dispatch_queue_root_vtable,
.do_ref_cnt = DISPATCH_OBJECT_GLOBAL_REFCNT,
.do_xref_cnt = DISPATCH_OBJECT_GLOBAL_REFCNT,
.do_suspend_cnt = DISPATCH_OBJECT_SUSPEND_LOCK,
- .do_ctxt = &_dispatch_root_queue_contexts[3],
+ .do_ctxt = &_dispatch_root_queue_contexts[
+ DISPATCH_ROOT_QUEUE_IDX_DEFAULT_OVERCOMMIT_PRIORITY],
.dq_label = "com.apple.root.default-overcommit-priority",
.dq_running = 2,
.dq_width = UINT32_MAX,
.dq_serialnum = 7,
},
- {
+ [DISPATCH_ROOT_QUEUE_IDX_HIGH_PRIORITY] = {
.do_vtable = &_dispatch_queue_root_vtable,
.do_ref_cnt = DISPATCH_OBJECT_GLOBAL_REFCNT,
.do_xref_cnt = DISPATCH_OBJECT_GLOBAL_REFCNT,
.do_suspend_cnt = DISPATCH_OBJECT_SUSPEND_LOCK,
- .do_ctxt = &_dispatch_root_queue_contexts[4],
+ .do_ctxt = &_dispatch_root_queue_contexts[
+ DISPATCH_ROOT_QUEUE_IDX_HIGH_PRIORITY],
.dq_label = "com.apple.root.high-priority",
.dq_running = 2,
.dq_width = UINT32_MAX,
.dq_serialnum = 8,
},
- {
+ [DISPATCH_ROOT_QUEUE_IDX_HIGH_OVERCOMMIT_PRIORITY] = {
.do_vtable = &_dispatch_queue_root_vtable,
.do_ref_cnt = DISPATCH_OBJECT_GLOBAL_REFCNT,
.do_xref_cnt = DISPATCH_OBJECT_GLOBAL_REFCNT,
.do_suspend_cnt = DISPATCH_OBJECT_SUSPEND_LOCK,
- .do_ctxt = &_dispatch_root_queue_contexts[5],
+ .do_ctxt = &_dispatch_root_queue_contexts[
+ DISPATCH_ROOT_QUEUE_IDX_HIGH_OVERCOMMIT_PRIORITY],
.dq_label = "com.apple.root.high-overcommit-priority",
.dq_running = 2,
.dq_width = UINT32_MAX,
.dq_serialnum = 9,
},
+ [DISPATCH_ROOT_QUEUE_IDX_BACKGROUND_PRIORITY] = {
+ .do_vtable = &_dispatch_queue_root_vtable,
+ .do_ref_cnt = DISPATCH_OBJECT_GLOBAL_REFCNT,
+ .do_xref_cnt = DISPATCH_OBJECT_GLOBAL_REFCNT,
+ .do_suspend_cnt = DISPATCH_OBJECT_SUSPEND_LOCK,
+ .do_ctxt = &_dispatch_root_queue_contexts[
+ DISPATCH_ROOT_QUEUE_IDX_BACKGROUND_PRIORITY],
+
+ .dq_label = "com.apple.root.background-priority",
+ .dq_running = 2,
+ .dq_width = UINT32_MAX,
+ .dq_serialnum = 10,
+ },
+ [DISPATCH_ROOT_QUEUE_IDX_BACKGROUND_OVERCOMMIT_PRIORITY] = {
+ .do_vtable = &_dispatch_queue_root_vtable,
+ .do_ref_cnt = DISPATCH_OBJECT_GLOBAL_REFCNT,
+ .do_xref_cnt = DISPATCH_OBJECT_GLOBAL_REFCNT,
+ .do_suspend_cnt = DISPATCH_OBJECT_SUSPEND_LOCK,
+ .do_ctxt = &_dispatch_root_queue_contexts[
+ DISPATCH_ROOT_QUEUE_IDX_BACKGROUND_OVERCOMMIT_PRIORITY],
+
+ .dq_label = "com.apple.root.background-overcommit-priority",
+ .dq_running = 2,
+ .dq_width = UINT32_MAX,
+ .dq_serialnum = 11,
+ },
};
-// 6618342 Contact the team that owns the Instrument DTrace probe before renaming this symbol
-struct dispatch_queue_s _dispatch_main_q = {
- .do_vtable = &_dispatch_queue_vtable,
+// 6618342 Contact the team that owns the Instrument DTrace probe before
+// renaming this symbol
+DISPATCH_CACHELINE_ALIGN
+struct dispatch_queue_s _dispatch_mgr_q = {
+ .do_vtable = &_dispatch_queue_mgr_vtable,
.do_ref_cnt = DISPATCH_OBJECT_GLOBAL_REFCNT,
.do_xref_cnt = DISPATCH_OBJECT_GLOBAL_REFCNT,
.do_suspend_cnt = DISPATCH_OBJECT_SUSPEND_LOCK,
- .do_targetq = &_dispatch_root_queues[DISPATCH_ROOT_QUEUE_COUNT / 2],
+ .do_targetq = &_dispatch_root_queues[
+ DISPATCH_ROOT_QUEUE_IDX_HIGH_OVERCOMMIT_PRIORITY],
- .dq_label = "com.apple.main-thread",
- .dq_running = 1,
+ .dq_label = "com.apple.libdispatch-manager",
.dq_width = 1,
- .dq_serialnum = 1,
+ .dq_serialnum = 2,
};
-#if DISPATCH_PERF_MON
-static OSSpinLock _dispatch_stats_lock;
-static size_t _dispatch_bad_ratio;
-static struct {
- uint64_t time_total;
- uint64_t count_total;
- uint64_t thread_total;
-} _dispatch_stats[65]; // ffs*/fls*() returns zero when no bits are set
-static void _dispatch_queue_merge_stats(uint64_t start);
-#endif
-
-static void *_dispatch_worker_thread(void *context);
-static void _dispatch_worker_thread2(void *context);
-
-malloc_zone_t *_dispatch_ccache_zone;
-
-static inline void
-_dispatch_continuation_free(dispatch_continuation_t dc)
+dispatch_queue_t
+dispatch_get_global_queue(long priority, unsigned long flags)
{
- dispatch_continuation_t prev_dc = _dispatch_thread_getspecific(dispatch_cache_key);
- dc->do_next = prev_dc;
- _dispatch_thread_setspecific(dispatch_cache_key, dc);
-}
-
-static inline void
-_dispatch_continuation_pop(dispatch_object_t dou)
-{
- dispatch_continuation_t dc = dou._dc;
- dispatch_group_t dg;
-
- if (DISPATCH_OBJ_IS_VTABLE(dou._do)) {
- return _dispatch_queue_invoke(dou._dq);
- }
-
- // Add the item back to the cache before calling the function. This
- // allows the 'hot' continuation to be used for a quick callback.
- //
- // The ccache version is per-thread.
- // Therefore, the object has not been reused yet.
- // This generates better assembly.
- if ((long)dou._do->do_vtable & DISPATCH_OBJ_ASYNC_BIT) {
- _dispatch_continuation_free(dc);
- }
- if ((long)dou._do->do_vtable & DISPATCH_OBJ_GROUP_BIT) {
- dg = dc->dc_group;
- } else {
- dg = NULL;
- }
- dc->dc_func(dc->dc_ctxt);
- if (dg) {
- dispatch_group_leave(dg);
- _dispatch_release(dg);
- }
-}
-
-struct dispatch_object_s *
-_dispatch_queue_concurrent_drain_one(dispatch_queue_t dq)
-{
- struct dispatch_object_s *head, *next, *const mediator = (void *)~0ul;
-
- // The mediator value acts both as a "lock" and a signal
- head = dispatch_atomic_xchg(&dq->dq_items_head, mediator);
-
- if (slowpath(head == NULL)) {
- // The first xchg on the tail will tell the enqueueing thread that it
- // is safe to blindly write out to the head pointer. A cmpxchg honors
- // the algorithm.
- (void)dispatch_atomic_cmpxchg(&dq->dq_items_head, mediator, NULL);
- _dispatch_debug("no work on global work queue");
+ if (flags & ~DISPATCH_QUEUE_OVERCOMMIT) {
return NULL;
}
-
- if (slowpath(head == mediator)) {
- // This thread lost the race for ownership of the queue.
- //
- // The ratio of work to libdispatch overhead must be bad. This
- // scenario implies that there are too many threads in the pool.
- // Create a new pending thread and then exit this thread.
- // The kernel will grant a new thread when the load subsides.
- _dispatch_debug("Contention on queue: %p", dq);
- _dispatch_queue_wakeup_global(dq);
-#if DISPATCH_PERF_MON
- dispatch_atomic_inc(&_dispatch_bad_ratio);
-#endif
- return NULL;
- }
-
- // Restore the head pointer to a sane value before returning.
- // If 'next' is NULL, then this item _might_ be the last item.
- next = fastpath(head->do_next);
-
- if (slowpath(!next)) {
- dq->dq_items_head = NULL;
-
- if (dispatch_atomic_cmpxchg(&dq->dq_items_tail, head, NULL)) {
- // both head and tail are NULL now
- goto out;
- }
-
- // There must be a next item now. This thread won't wait long.
- while (!(next = head->do_next)) {
- _dispatch_hardware_pause();
- }
- }
-
- dq->dq_items_head = next;
- _dispatch_queue_wakeup_global(dq);
-out:
- return head;
+ return _dispatch_get_root_queue(priority,
+ flags & DISPATCH_QUEUE_OVERCOMMIT);
}
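
[editor's note: illustrative sketch, not part of the patch]
dispatch_get_global_queue() now rejects any flag other than overcommit and maps (priority, overcommit) onto the eight root queues defined above, including the two new background queues. A minimal usage sketch; DISPATCH_QUEUE_OVERCOMMIT lives in the private queue header, so portable callers pass 0 for flags:

    #include <dispatch/dispatch.h>

    int main(void)
    {
        // flags must be 0 (or the private DISPATCH_QUEUE_OVERCOMMIT bit);
        // anything else makes the call return NULL.
        dispatch_queue_t bg = dispatch_get_global_queue(
                DISPATCH_QUEUE_PRIORITY_BACKGROUND, 0);
        dispatch_queue_t def = dispatch_get_global_queue(
                DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);

        dispatch_async(bg, ^{ /* lowest-priority, throttled work */ });
        dispatch_async(def, ^{ /* ordinary asynchronous work */ });
        dispatch_main();   // park the main thread so the work can run
    }
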
dispatch_queue_t
@@ -429,77 +359,256 @@
return _dispatch_queue_get_current() ?: _dispatch_get_root_queue(0, true);
}
-#undef dispatch_get_main_queue
-__OSX_AVAILABLE_STARTING(__MAC_10_6,__IPHONE_NA)
-dispatch_queue_t dispatch_get_main_queue(void);
-
-dispatch_queue_t
-dispatch_get_main_queue(void)
-{
- return &_dispatch_main_q;
-}
-#define dispatch_get_main_queue() (&_dispatch_main_q)
-
-struct _dispatch_hw_config_s _dispatch_hw_config;
+#pragma mark -
+#pragma mark dispatch_init
static void
-_dispatch_queue_set_width_init(void)
+_dispatch_hw_config_init(void)
{
-#ifdef __APPLE__
- size_t valsz = sizeof(uint32_t);
- int ret;
-
- ret = sysctlbyname("hw.activecpu", &_dispatch_hw_config.cc_max_active,
- &valsz, NULL, 0);
- (void)dispatch_assume_zero(ret);
- dispatch_assume(valsz == sizeof(uint32_t));
-
- ret = sysctlbyname("hw.logicalcpu_max",
- &_dispatch_hw_config.cc_max_logical, &valsz, NULL, 0);
- (void)dispatch_assume_zero(ret);
- dispatch_assume(valsz == sizeof(uint32_t));
-
- ret = sysctlbyname("hw.physicalcpu_max",
- &_dispatch_hw_config.cc_max_physical, &valsz, NULL, 0);
- (void)dispatch_assume_zero(ret);
- dispatch_assume(valsz == sizeof(uint32_t));
-#elif defined(__FreeBSD__)
- size_t valsz = sizeof(uint32_t);
- int ret;
-
- ret = sysctlbyname("kern.smp.cpus", &_dispatch_hw_config.cc_max_active,
- &valsz, NULL, 0);
- (void)dispatch_assume_zero(ret);
- (void)dispatch_assume(valsz == sizeof(uint32_t));
-
- _dispatch_hw_config.cc_max_logical =
- _dispatch_hw_config.cc_max_physical =
- _dispatch_hw_config.cc_max_active;
-#elif HAVE_SYSCONF && defined(_SC_NPROCESSORS_ONLN)
- int ret;
-
- ret = (int)sysconf(_SC_NPROCESSORS_ONLN);
-
- _dispatch_hw_config.cc_max_logical =
- _dispatch_hw_config.cc_max_physical =
- _dispatch_hw_config.cc_max_active = (ret < 0) ? 1 : ret;
-#else
-#warning "_dispatch_queue_set_width_init: no supported way to query CPU count"
- _dispatch_hw_config.cc_max_logical =
- _dispatch_hw_config.cc_max_physical =
- _dispatch_hw_config.cc_max_active = 1;
-#endif
+ _dispatch_hw_config.cc_max_active = _dispatch_get_activecpu();
+ _dispatch_hw_config.cc_max_logical = _dispatch_get_logicalcpu_max();
+ _dispatch_hw_config.cc_max_physical = _dispatch_get_physicalcpu_max();
}
-void
-dispatch_queue_set_width(dispatch_queue_t dq, long width)
+static inline bool
+_dispatch_root_queues_init_workq(void)
{
- int w = (int)width; // intentional truncation
- uint32_t tmp;
+ bool result = false;
+#if HAVE_PTHREAD_WORKQUEUES
+#if DISPATCH_ENABLE_THREAD_POOL
+ if (slowpath(getenv("LIBDISPATCH_DISABLE_KWQ"))) return result;
+#endif
+ int i, r;
+ pthread_workqueue_attr_t pwq_attr;
+ r = pthread_workqueue_attr_init_np(&pwq_attr);
+ (void)dispatch_assume_zero(r);
+ for (i = 0; i < DISPATCH_ROOT_QUEUE_COUNT; i++) {
+ pthread_workqueue_t pwq = NULL;
+ const int prio = _dispatch_root_queue_wq_priorities[i];
- if (slowpath(dq->do_ref_cnt == DISPATCH_OBJECT_GLOBAL_REFCNT)) {
+ r = pthread_workqueue_attr_setqueuepriority_np(&pwq_attr, prio);
+ (void)dispatch_assume_zero(r);
+ r = pthread_workqueue_attr_setovercommit_np(&pwq_attr, i & 1);
+ (void)dispatch_assume_zero(r);
+ r = pthread_workqueue_create_np(&pwq, &pwq_attr);
+ (void)dispatch_assume_zero(r);
+ result = result || dispatch_assume(pwq);
+ _dispatch_root_queue_contexts[i].dgq_kworkqueue = pwq;
+ }
+ r = pthread_workqueue_attr_destroy_np(&pwq_attr);
+ (void)dispatch_assume_zero(r);
+#endif // HAVE_PTHREAD_WORKQUEUES
+ return result;
+}
+
+static inline void
+_dispatch_root_queues_init_thread_pool(void)
+{
+#if DISPATCH_ENABLE_THREAD_POOL
+ int i;
+ for (i = 0; i < DISPATCH_ROOT_QUEUE_COUNT; i++) {
+#if TARGET_OS_EMBEDDED
+ // some software hangs if the non-overcommitting queues do not
+ // overcommit when threads block. Someday, this behavior should apply
+ // to all platforms
+ if (!(i & 1)) {
+ _dispatch_root_queue_contexts[i].dgq_thread_pool_size =
+ _dispatch_hw_config.cc_max_active;
+ }
+#endif
+#if USE_MACH_SEM
+ // override the default FIFO behavior for the pool semaphores
+ kern_return_t kr = semaphore_create(mach_task_self(),
+ &_dispatch_thread_mediator[i].dsema_port, SYNC_POLICY_LIFO, 0);
+ DISPATCH_VERIFY_MIG(kr);
+ (void)dispatch_assume_zero(kr);
+ (void)dispatch_assume(_dispatch_thread_mediator[i].dsema_port);
+#elif USE_POSIX_SEM
+ /* XXXRW: POSIX semaphores don't support LIFO? */
+ int ret = sem_init(&_dispatch_thread_mediator[i].dsema_sem, 0, 0);
+ (void)dispatch_assume_zero(ret);
+#endif
+ }
+#else
+ DISPATCH_CRASH("Thread pool creation failed");
+#endif // DISPATCH_ENABLE_THREAD_POOL
+}
+
+static void
+_dispatch_root_queues_init(void *context DISPATCH_UNUSED)
+{
+ if (!_dispatch_root_queues_init_workq()) {
+ _dispatch_root_queues_init_thread_pool();
+ }
+
+}
+
+#define countof(x) (sizeof(x) / sizeof(x[0]))
+
+DISPATCH_EXPORT DISPATCH_NOTHROW
+void
+libdispatch_init(void)
+{
+ dispatch_assert(DISPATCH_QUEUE_PRIORITY_COUNT == 4);
+ dispatch_assert(DISPATCH_ROOT_QUEUE_COUNT == 8);
+
+ dispatch_assert(DISPATCH_QUEUE_PRIORITY_LOW ==
+ -DISPATCH_QUEUE_PRIORITY_HIGH);
+ dispatch_assert(countof(_dispatch_root_queues) ==
+ DISPATCH_ROOT_QUEUE_COUNT);
+ dispatch_assert(countof(_dispatch_root_queue_contexts) ==
+ DISPATCH_ROOT_QUEUE_COUNT);
+#if HAVE_PTHREAD_WORKQUEUES
+ dispatch_assert(countof(_dispatch_root_queue_wq_priorities) ==
+ DISPATCH_ROOT_QUEUE_COUNT);
+#endif
+#if DISPATCH_ENABLE_THREAD_POOL
+ dispatch_assert(countof(_dispatch_thread_mediator) ==
+ DISPATCH_ROOT_QUEUE_COUNT);
+#endif
+ dispatch_assert(sizeof(struct dispatch_source_s) ==
+ sizeof(struct dispatch_queue_s) - DISPATCH_QUEUE_CACHELINE_PAD);
+#if DISPATCH_DEBUG
+ dispatch_assert(sizeof(struct dispatch_queue_s) % DISPATCH_CACHELINE_SIZE
+ == 0);
+#endif
+
+ _dispatch_thread_key_create(&dispatch_queue_key, _dispatch_queue_cleanup);
+ _dispatch_thread_key_create(&dispatch_sema4_key,
+ (void (*)(void *))_dispatch_thread_semaphore_dispose);
+ _dispatch_thread_key_create(&dispatch_cache_key, _dispatch_cache_cleanup);
+ _dispatch_thread_key_create(&dispatch_io_key, NULL);
+ _dispatch_thread_key_create(&dispatch_apply_key, NULL);
+#if DISPATCH_PERF_MON
+ _dispatch_thread_key_create(&dispatch_bcounter_key, NULL);
+#endif
+
+#if DISPATCH_USE_RESOLVERS // rdar://problem/8541707
+ _dispatch_main_q.do_vtable = &_dispatch_queue_vtable;
+ _dispatch_main_q.do_targetq = &_dispatch_root_queues[
+ DISPATCH_ROOT_QUEUE_IDX_DEFAULT_OVERCOMMIT_PRIORITY];
+ _dispatch_data_empty.do_vtable = &_dispatch_data_vtable;
+#endif
+
+ _dispatch_thread_setspecific(dispatch_queue_key, &_dispatch_main_q);
+
+#if DISPATCH_USE_PTHREAD_ATFORK
+ (void)dispatch_assume_zero(pthread_atfork(dispatch_atfork_prepare,
+ dispatch_atfork_parent, dispatch_atfork_child));
+#endif
+
+ _dispatch_hw_config_init();
+}
+
+DISPATCH_EXPORT DISPATCH_NOTHROW
+void
+dispatch_atfork_child(void)
+{
+ void *crash = (void *)0x100;
+ size_t i;
+
+ if (_dispatch_safe_fork) {
return;
}
+
+ _dispatch_main_q.dq_items_head = crash;
+ _dispatch_main_q.dq_items_tail = crash;
+
+ _dispatch_mgr_q.dq_items_head = crash;
+ _dispatch_mgr_q.dq_items_tail = crash;
+
+ for (i = 0; i < DISPATCH_ROOT_QUEUE_COUNT; i++) {
+ _dispatch_root_queues[i].dq_items_head = crash;
+ _dispatch_root_queues[i].dq_items_tail = crash;
+ }
+}
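
[editor's note: illustrative sketch, not part of the patch]
dispatch_atfork_child() deliberately poisons every queue's head/tail with an unmapped address (0x100) when the parent is no longer fork-safe, so misuse in the child crashes immediately instead of deadlocking. A minimal sketch of the only pattern the child may rely on (the helper binary path is illustrative):

    #include <unistd.h>

    static void
    spawn_helper(void)
    {
        pid_t pid = fork();
        if (pid == 0) {
            // The child must restrict itself to async-signal-safe calls and
            // exec(); touching a dispatch queue here would fault on the
            // poisoned pointers installed by dispatch_atfork_child().
            execl("/usr/bin/true", "true", (char *)NULL);
            _exit(127);
        }
        // parent continues; its libdispatch state is untouched
    }
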
+
+#pragma mark -
+#pragma mark dispatch_queue_t
+
+// skip zero
+// 1 - main_q
+// 2 - mgr_q
+// 3 - _unused_
+// 4,5,6,7,8,9,10,11 - global queues
+// we use 'xadd' on Intel, so the initial value == next assigned
+unsigned long _dispatch_queue_serial_numbers = 12;
+
+dispatch_queue_t
+dispatch_queue_create(const char *label, dispatch_queue_attr_t attr)
+{
+ dispatch_queue_t dq;
+ size_t label_len;
+
+ if (!label) {
+ label = "";
+ }
+
+ label_len = strlen(label);
+ if (label_len < (DISPATCH_QUEUE_MIN_LABEL_SIZE - 1)) {
+ label_len = (DISPATCH_QUEUE_MIN_LABEL_SIZE - 1);
+ }
+
+ // XXX switch to malloc()
+ dq = calloc(1ul, sizeof(struct dispatch_queue_s) -
+ DISPATCH_QUEUE_MIN_LABEL_SIZE - DISPATCH_QUEUE_CACHELINE_PAD +
+ label_len + 1);
+ if (slowpath(!dq)) {
+ return dq;
+ }
+
+ _dispatch_queue_init(dq);
+ strcpy(dq->dq_label, label);
+
+ if (fastpath(!attr)) {
+ return dq;
+ }
+ if (fastpath(attr == DISPATCH_QUEUE_CONCURRENT)) {
+ dq->dq_width = UINT32_MAX;
+ dq->do_targetq = _dispatch_get_root_queue(0, false);
+ } else {
+ dispatch_debug_assert(!attr, "Invalid attribute");
+ }
+ return dq;
+}
+
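
[editor's note: illustrative sketch, not part of the patch]
dispatch_queue_create() now recognizes the DISPATCH_QUEUE_CONCURRENT attribute, giving the queue an effectively unbounded width (UINT32_MAX) and a non-overcommit root target, while the legacy attr structure is gone. A minimal usage sketch pairing it with dispatch_barrier_async() (label and work are illustrative):

    #include <dispatch/dispatch.h>

    int main(void)
    {
        dispatch_queue_t cq = dispatch_queue_create("com.example.rwlock",
                DISPATCH_QUEUE_CONCURRENT);

        dispatch_async(cq, ^{ /* reader 1 */ });
        dispatch_async(cq, ^{ /* reader 2: may run concurrently with 1 */ });
        dispatch_barrier_async(cq, ^{ /* writer: runs exclusively on cq */ });

        dispatch_barrier_sync(cq, ^{ /* everything enqueued above is done */ });
        dispatch_release(cq);
        return 0;
    }
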
+// 6618342 Contact the team that owns the Instrument DTrace probe before
+// renaming this symbol
+void
+_dispatch_queue_dispose(dispatch_queue_t dq)
+{
+ if (slowpath(dq == _dispatch_queue_get_current())) {
+ DISPATCH_CRASH("Release of a queue by itself");
+ }
+ if (slowpath(dq->dq_items_tail)) {
+ DISPATCH_CRASH("Release of a queue while items are enqueued");
+ }
+
+ // trash the tail queue so that use after free will crash
+ dq->dq_items_tail = (void *)0x200;
+
+ dispatch_queue_t dqsq = dispatch_atomic_xchg2o(dq, dq_specific_q,
+ (void *)0x200);
+ if (dqsq) {
+ _dispatch_release(dqsq);
+ }
+
+ _dispatch_dispose(dq);
+}
+
+const char *
+dispatch_queue_get_label(dispatch_queue_t dq)
+{
+ return dq->dq_label;
+}
+
+static void
+_dispatch_queue_set_width2(void *ctxt)
+{
+ int w = (int)(intptr_t)ctxt; // intentional truncation
+ uint32_t tmp;
+ dispatch_queue_t dq = _dispatch_queue_get_current();
+
if (w == 1 || w == 0) {
dq->dq_width = 1;
return;
@@ -519,831 +628,28 @@
tmp = _dispatch_hw_config.cc_max_logical;
break;
}
- // multiply by two since the running count is inc/dec by two (the low bit == barrier)
+ // multiply by two since the running count is inc/dec by two
+ // (the low bit == barrier)
dq->dq_width = tmp * 2;
-
- // XXX if the queue has items and the width is increased, we should try to wake the queue
}
-// skip zero
-// 1 - main_q
-// 2 - mgr_q
-// 3 - _unused_
-// 4,5,6,7,8,9 - global queues
-// we use 'xadd' on Intel, so the initial value == next assigned
-static unsigned long _dispatch_queue_serial_numbers = 10;
-
-// Note to later developers: ensure that any initialization changes are
-// made for statically allocated queues (i.e. _dispatch_main_q).
-inline void
-_dispatch_queue_init(dispatch_queue_t dq)
-{
- dq->do_vtable = &_dispatch_queue_vtable;
- dq->do_next = DISPATCH_OBJECT_LISTLESS;
- dq->do_ref_cnt = 1;
- dq->do_xref_cnt = 1;
- dq->do_targetq = _dispatch_get_root_queue(0, true);
- dq->dq_running = 0;
- dq->dq_width = 1;
- dq->dq_serialnum = dispatch_atomic_inc(&_dispatch_queue_serial_numbers) - 1;
-}
-
-dispatch_queue_t
-dispatch_queue_create(const char *label, dispatch_queue_attr_t attr)
-{
- dispatch_queue_t dq;
- size_t label_len;
-
- if (!label) {
- label = "";
- }
-
- label_len = strlen(label);
- if (label_len < (DISPATCH_QUEUE_MIN_LABEL_SIZE - 1)) {
- label_len = (DISPATCH_QUEUE_MIN_LABEL_SIZE - 1);
- }
-
- // XXX switch to malloc()
- dq = calloc(1ul, sizeof(struct dispatch_queue_s) - DISPATCH_QUEUE_MIN_LABEL_SIZE + label_len + 1);
- if (slowpath(!dq)) {
- return dq;
- }
-
- _dispatch_queue_init(dq);
- strcpy(dq->dq_label, label);
-
-#ifndef DISPATCH_NO_LEGACY
- if (slowpath(attr)) {
- dq->do_targetq = _dispatch_get_root_queue(attr->qa_priority, attr->qa_flags & DISPATCH_QUEUE_OVERCOMMIT);
- dq->dq_finalizer_ctxt = attr->finalizer_ctxt;
- dq->dq_finalizer_func = attr->finalizer_func;
-#ifdef __BLOCKS__
- if (attr->finalizer_func == (void*)_dispatch_call_block_and_release2) {
- // if finalizer_ctxt is a Block, retain it.
- dq->dq_finalizer_ctxt = Block_copy(dq->dq_finalizer_ctxt);
- if (!(dq->dq_finalizer_ctxt)) {
- goto out_bad;
- }
- }
-#endif
- }
-#else
- (void)attr;
-#endif
-
- return dq;
-
-#if !defined(DISPATCH_NO_LEGACY) && defined(__BLOCKS__)
-out_bad:
-#endif
- free(dq);
- return NULL;
-}
-
-// 6618342 Contact the team that owns the Instrument DTrace probe before renaming this symbol
void
-_dispatch_queue_dispose(dispatch_queue_t dq)
+dispatch_queue_set_width(dispatch_queue_t dq, long width)
{
- if (slowpath(dq == _dispatch_queue_get_current())) {
- DISPATCH_CRASH("Release of a queue by itself");
- }
- if (slowpath(dq->dq_items_tail)) {
- DISPATCH_CRASH("Release of a queue while items are enqueued");
- }
-
-#ifndef DISPATCH_NO_LEGACY
- if (dq->dq_finalizer_func) {
- dq->dq_finalizer_func(dq->dq_finalizer_ctxt, dq);
- }
-#endif
-
- // trash the tail queue so that use after free will crash
- dq->dq_items_tail = (void *)0x200;
-
- _dispatch_dispose(dq);
-}
-
-DISPATCH_NOINLINE
-void
-_dispatch_queue_push_list_slow(dispatch_queue_t dq, struct dispatch_object_s *obj)
-{
- // The queue must be retained before dq_items_head is written in order
- // to ensure that the reference is still valid when _dispatch_wakeup is
- // called. Otherwise, if preempted between the assignment to
- // dq_items_head and _dispatch_wakeup, the blocks submitted to the
- // queue may release the last reference to the queue when invoked by
- // _dispatch_queue_drain. <rdar://problem/6932776>
- _dispatch_retain(dq);
- dq->dq_items_head = obj;
- _dispatch_wakeup(dq);
- _dispatch_release(dq);
-}
-
-DISPATCH_NOINLINE
-static void
-_dispatch_barrier_async_f_slow(dispatch_queue_t dq, void *context, dispatch_function_t func)
-{
- dispatch_continuation_t dc = fastpath(_dispatch_continuation_alloc_from_heap());
-
- dc->do_vtable = (void *)(DISPATCH_OBJ_ASYNC_BIT | DISPATCH_OBJ_BARRIER_BIT);
- dc->dc_func = func;
- dc->dc_ctxt = context;
-
- _dispatch_queue_push(dq, dc);
-}
-
-#ifdef __BLOCKS__
-void
-dispatch_barrier_async(dispatch_queue_t dq, void (^work)(void))
-{
- dispatch_barrier_async_f(dq, _dispatch_Block_copy(work), _dispatch_call_block_and_release);
-}
-#endif
-
-DISPATCH_NOINLINE
-void
-dispatch_barrier_async_f(dispatch_queue_t dq, void *context, dispatch_function_t func)
-{
- dispatch_continuation_t dc = fastpath(_dispatch_continuation_alloc_cacheonly());
-
- if (!dc) {
- return _dispatch_barrier_async_f_slow(dq, context, func);
- }
-
- dc->do_vtable = (void *)(DISPATCH_OBJ_ASYNC_BIT | DISPATCH_OBJ_BARRIER_BIT);
- dc->dc_func = func;
- dc->dc_ctxt = context;
-
- _dispatch_queue_push(dq, dc);
-}
-
-DISPATCH_NOINLINE
-static void
-_dispatch_async_f_slow(dispatch_queue_t dq, void *context, dispatch_function_t func)
-{
- dispatch_continuation_t dc = fastpath(_dispatch_continuation_alloc_from_heap());
-
- dc->do_vtable = (void *)DISPATCH_OBJ_ASYNC_BIT;
- dc->dc_func = func;
- dc->dc_ctxt = context;
-
- _dispatch_queue_push(dq, dc);
-}
-
-#ifdef __BLOCKS__
-void
-dispatch_async(dispatch_queue_t dq, void (^work)(void))
-{
- dispatch_async_f(dq, _dispatch_Block_copy(work), _dispatch_call_block_and_release);
-}
-#endif
-
-DISPATCH_NOINLINE
-void
-dispatch_async_f(dispatch_queue_t dq, void *ctxt, dispatch_function_t func)
-{
- dispatch_continuation_t dc = fastpath(_dispatch_continuation_alloc_cacheonly());
-
- // unlike dispatch_sync_f(), we do NOT need to check the queue width,
- // the "drain" function will do this test
-
- if (!dc) {
- return _dispatch_async_f_slow(dq, ctxt, func);
- }
-
- dc->do_vtable = (void *)DISPATCH_OBJ_ASYNC_BIT;
- dc->dc_func = func;
- dc->dc_ctxt = ctxt;
-
- _dispatch_queue_push(dq, dc);
-}
-
-struct dispatch_barrier_sync_slow2_s {
- dispatch_queue_t dbss2_dq;
- dispatch_function_t dbss2_func;
- dispatch_function_t dbss2_ctxt;
- dispatch_semaphore_t dbss2_sema;
-};
-
-static void
-_dispatch_barrier_sync_f_slow_invoke(void *ctxt)
-{
- struct dispatch_barrier_sync_slow2_s *dbss2 = ctxt;
-
- dispatch_assert(dbss2->dbss2_dq == dispatch_get_current_queue());
- // ALL blocks on the main queue, must be run on the main thread
- if (dbss2->dbss2_dq == dispatch_get_main_queue()) {
- dbss2->dbss2_func(dbss2->dbss2_ctxt);
- } else {
- dispatch_suspend(dbss2->dbss2_dq);
- }
- dispatch_semaphore_signal(dbss2->dbss2_sema);
-}
-
-DISPATCH_NOINLINE
-static void
-_dispatch_barrier_sync_f_slow(dispatch_queue_t dq, void *ctxt, dispatch_function_t func)
-{
-
- // It's preferred to execute synchronous blocks on the current thread
- // due to thread-local side effects, garbage collection, etc. However,
- // blocks submitted to the main thread MUST be run on the main thread
-
- struct dispatch_barrier_sync_slow2_s dbss2 = {
- .dbss2_dq = dq,
- .dbss2_func = func,
- .dbss2_ctxt = ctxt,
- .dbss2_sema = _dispatch_get_thread_semaphore(),
- };
- struct dispatch_barrier_sync_slow_s {
- DISPATCH_CONTINUATION_HEADER(dispatch_barrier_sync_slow_s);
- } dbss = {
- .do_vtable = (void *)DISPATCH_OBJ_BARRIER_BIT,
- .dc_func = _dispatch_barrier_sync_f_slow_invoke,
- .dc_ctxt = &dbss2,
- };
-
- dispatch_queue_t old_dq = _dispatch_thread_getspecific(dispatch_queue_key);
- _dispatch_queue_push(dq, (void *)&dbss);
- dispatch_semaphore_wait(dbss2.dbss2_sema, DISPATCH_TIME_FOREVER);
-
- if (dq != dispatch_get_main_queue()) {
- _dispatch_thread_setspecific(dispatch_queue_key, dq);
- func(ctxt);
- _dispatch_workitem_inc();
- _dispatch_thread_setspecific(dispatch_queue_key, old_dq);
- dispatch_resume(dq);
- }
- _dispatch_put_thread_semaphore(dbss2.dbss2_sema);
-}
-
-#ifdef __BLOCKS__
-void
-dispatch_barrier_sync(dispatch_queue_t dq, void (^work)(void))
-{
- // Blocks submitted to the main queue MUST be run on the main thread,
- // therefore we must Block_copy in order to notify the thread-local
- // garbage collector that the objects are transferring to the main thread
- if (dq == dispatch_get_main_queue()) {
- dispatch_block_t block = Block_copy(work);
- return dispatch_barrier_sync_f(dq, block, _dispatch_call_block_and_release);
- }
- struct Block_basic *bb = (void *)work;
-
- dispatch_barrier_sync_f(dq, work, (dispatch_function_t)bb->Block_invoke);
-}
-#endif
-
-DISPATCH_NOINLINE
-void
-dispatch_barrier_sync_f(dispatch_queue_t dq, void *ctxt, dispatch_function_t func)
-{
- dispatch_queue_t old_dq = _dispatch_thread_getspecific(dispatch_queue_key);
-
- // 1) ensure that this thread hasn't enqueued anything ahead of this call
- // 2) the queue is not suspended
- // 3) the queue is not weird
- if (slowpath(dq->dq_items_tail)
- || slowpath(DISPATCH_OBJECT_SUSPENDED(dq))
- || slowpath(!_dispatch_queue_trylock(dq))) {
- return _dispatch_barrier_sync_f_slow(dq, ctxt, func);
- }
-
- _dispatch_thread_setspecific(dispatch_queue_key, dq);
- func(ctxt);
- _dispatch_workitem_inc();
- _dispatch_thread_setspecific(dispatch_queue_key, old_dq);
- _dispatch_queue_unlock(dq);
-}
-
-static void
-_dispatch_sync_f_slow2(void *ctxt)
-{
- dispatch_queue_t dq = _dispatch_queue_get_current();
- dispatch_atomic_add(&dq->dq_running, 2);
- dispatch_semaphore_signal(ctxt);
-}
-
-DISPATCH_NOINLINE
-static void
-_dispatch_sync_f_slow(dispatch_queue_t dq)
-{
- // the global root queues do not need strict ordering
- if (dq->do_targetq == NULL) {
- dispatch_atomic_add(&dq->dq_running, 2);
+ if (slowpath(dq->do_ref_cnt == DISPATCH_OBJECT_GLOBAL_REFCNT)) {
return;
}
-
- struct dispatch_sync_slow_s {
- DISPATCH_CONTINUATION_HEADER(dispatch_sync_slow_s);
- } dss = {
- .do_vtable = NULL,
- .dc_func = _dispatch_sync_f_slow2,
- .dc_ctxt = _dispatch_get_thread_semaphore(),
- };
-
- // XXX FIXME -- concurrent queues can be come serial again
- _dispatch_queue_push(dq, (void *)&dss);
-
- dispatch_semaphore_wait(dss.dc_ctxt, DISPATCH_TIME_FOREVER);
- _dispatch_put_thread_semaphore(dss.dc_ctxt);
+ dispatch_barrier_async_f(dq, (void*)(intptr_t)width,
+ _dispatch_queue_set_width2);
}
-#ifdef __BLOCKS__
-void
-dispatch_sync(dispatch_queue_t dq, void (^work)(void))
-{
- struct Block_basic *bb = (void *)work;
- dispatch_sync_f(dq, work, (dispatch_function_t)bb->Block_invoke);
-}
-#endif
-
-DISPATCH_NOINLINE
-void
-dispatch_sync_f(dispatch_queue_t dq, void *ctxt, dispatch_function_t func)
-{
- typeof(dq->dq_running) prev_cnt;
- dispatch_queue_t old_dq;
-
- if (dq->dq_width == 1) {
- return dispatch_barrier_sync_f(dq, ctxt, func);
- }
-
- // 1) ensure that this thread hasn't enqueued anything ahead of this call
- // 2) the queue is not suspended
- if (slowpath(dq->dq_items_tail) || slowpath(DISPATCH_OBJECT_SUSPENDED(dq))) {
- _dispatch_sync_f_slow(dq);
- } else {
- prev_cnt = dispatch_atomic_add(&dq->dq_running, 2) - 2;
-
- if (slowpath(prev_cnt & 1)) {
- if (dispatch_atomic_sub(&dq->dq_running, 2) == 0) {
- _dispatch_wakeup(dq);
- }
- _dispatch_sync_f_slow(dq);
- }
- }
-
- old_dq = _dispatch_thread_getspecific(dispatch_queue_key);
- _dispatch_thread_setspecific(dispatch_queue_key, dq);
- func(ctxt);
- _dispatch_workitem_inc();
- _dispatch_thread_setspecific(dispatch_queue_key, old_dq);
-
- if (slowpath(dispatch_atomic_sub(&dq->dq_running, 2) == 0)) {
- _dispatch_wakeup(dq);
- }
-}
-
-const char *
-dispatch_queue_get_label(dispatch_queue_t dq)
-{
- return dq->dq_label;
-}
-
-#if DISPATCH_COCOA_COMPAT
-static void
-_dispatch_main_q_port_init(void *ctxt __attribute__((unused)))
-{
- kern_return_t kr;
-
- kr = mach_port_allocate(mach_task_self(), MACH_PORT_RIGHT_RECEIVE, &main_q_port);
- DISPATCH_VERIFY_MIG(kr);
- (void)dispatch_assume_zero(kr);
- kr = mach_port_insert_right(mach_task_self(), main_q_port, main_q_port, MACH_MSG_TYPE_MAKE_SEND);
- DISPATCH_VERIFY_MIG(kr);
- (void)dispatch_assume_zero(kr);
-
- _dispatch_program_is_probably_callback_driven = true;
- _dispatch_safe_fork = false;
-}
-
-// 6618342 Contact the team that owns the Instrument DTrace probe before renaming this symbol
-DISPATCH_NOINLINE
-static void
-_dispatch_queue_set_mainq_drain_state(bool arg)
-{
- main_q_is_draining = arg;
-}
-#endif
-
-/*
- * XXXRW: Work-around for possible clang bug in which __builtin_trap() is not
- * marked noreturn, leading to a build error as dispatch_main() *is* marked
- * noreturn. Mask by marking __builtin_trap() as noreturn locally.
- */
-#ifndef HAVE_NORETURN_BUILTIN_TRAP
-void __builtin_trap(void) __attribute__((__noreturn__));
-#endif
-
-void
-dispatch_main(void)
-{
-
-#if HAVE_PTHREAD_MAIN_NP
- if (pthread_main_np()) {
-#endif
- _dispatch_program_is_probably_callback_driven = true;
-#if defined(__linux__)
- // Workaround for a GNU/Linux bug that causes the process to become a
- // zombie when the main thread calls pthread_exit().
- void *p = _dispatch_thread_getspecific(dispatch_sema4_key);
- if (p) dispatch_release(p);
- _dispatch_force_cache_cleanup();
- _dispatch_queue_cleanup2();
-#endif // defined(__linux__)
- pthread_exit(NULL);
- DISPATCH_CRASH("pthread_exit() returned");
-#if HAVE_PTHREAD_MAIN_NP
- }
- DISPATCH_CLIENT_CRASH("dispatch_main() must be called on the main thread");
-#endif
-}
-
-static void
-_dispatch_sigsuspend(void *ctxt __attribute__((unused)))
-{
- static const sigset_t mask;
-
- for (;;) {
- sigsuspend(&mask);
- }
-}
-
-DISPATCH_NOINLINE
-static void
-_dispatch_queue_cleanup2(void)
-{
- dispatch_atomic_dec(&_dispatch_main_q.dq_running);
-
- if (dispatch_atomic_sub(&_dispatch_main_q.do_suspend_cnt, DISPATCH_OBJECT_SUSPEND_LOCK) == 0) {
- _dispatch_wakeup(&_dispatch_main_q);
- }
-
- // overload the "probably" variable to mean that dispatch_main() or
- // similar non-POSIX API was called
- // this has to run before the DISPATCH_COCOA_COMPAT below
- if (_dispatch_program_is_probably_callback_driven) {
-#if defined(__linux__)
- // Use the main thread as the signal handler thread on GNU/Linux
- _dispatch_sigsuspend(NULL);
-#else
- dispatch_async_f(_dispatch_get_root_queue(0, 0), NULL, _dispatch_sigsuspend);
- sleep(1); // workaround 6778970
-#endif // defined(__linux__)
- }
-
-#if DISPATCH_COCOA_COMPAT
- dispatch_once_f(&_dispatch_main_q_port_pred, NULL, _dispatch_main_q_port_init);
-
- mach_port_t mp = main_q_port;
- kern_return_t kr;
-
- main_q_port = 0;
-
- if (mp) {
- kr = mach_port_deallocate(mach_task_self(), mp);
- DISPATCH_VERIFY_MIG(kr);
- (void)dispatch_assume_zero(kr);
- kr = mach_port_mod_refs(mach_task_self(), mp, MACH_PORT_RIGHT_RECEIVE, -1);
- DISPATCH_VERIFY_MIG(kr);
- (void)dispatch_assume_zero(kr);
- }
-#endif
-}
-
-#ifndef DISPATCH_NO_LEGACY
-dispatch_queue_t
-dispatch_get_concurrent_queue(long pri)
-{
- if (pri > 0) {
- pri = DISPATCH_QUEUE_PRIORITY_HIGH;
- } else if (pri < 0) {
- pri = DISPATCH_QUEUE_PRIORITY_LOW;
- }
- return _dispatch_get_root_queue(pri, false);
-}
-#endif
-
-static void
-_dispatch_queue_cleanup(void *ctxt)
-{
- if (ctxt == &_dispatch_main_q) {
- return _dispatch_queue_cleanup2();
- }
- // POSIX defines that destructors are only called if 'ctxt' is non-null
- DISPATCH_CRASH("Premature thread exit while a dispatch queue is running");
-}
-
-dispatch_queue_t
-dispatch_get_global_queue(long priority, unsigned long flags)
-{
- if (flags & ~DISPATCH_QUEUE_OVERCOMMIT) {
- return NULL;
- }
- return _dispatch_get_root_queue(priority, flags & DISPATCH_QUEUE_OVERCOMMIT);
-}
-
-#define countof(x) (sizeof(x) / sizeof(x[0]))
-void
-libdispatch_init(void)
-{
- dispatch_assert(DISPATCH_QUEUE_PRIORITY_COUNT == 3);
- dispatch_assert(DISPATCH_ROOT_QUEUE_COUNT == 6);
-
- dispatch_assert(DISPATCH_QUEUE_PRIORITY_LOW == -DISPATCH_QUEUE_PRIORITY_HIGH);
- dispatch_assert(countof(_dispatch_root_queues) == DISPATCH_ROOT_QUEUE_COUNT);
- dispatch_assert(countof(_dispatch_thread_mediator) == DISPATCH_ROOT_QUEUE_COUNT);
- dispatch_assert(countof(_dispatch_root_queue_contexts) == DISPATCH_ROOT_QUEUE_COUNT);
-
-#if HAVE_PTHREAD_KEY_INIT_NP
- _dispatch_thread_key_init_np(dispatch_queue_key, _dispatch_queue_cleanup);
- _dispatch_thread_key_init_np(dispatch_sema4_key, (void (*)(void *))dispatch_release); // use the extern release
- _dispatch_thread_key_init_np(dispatch_cache_key, _dispatch_cache_cleanup2);
-#if DISPATCH_PERF_MON
- _dispatch_thread_key_init_np(dispatch_bcounter_key, NULL);
-#endif
-#else /* !HAVE_PTHREAD_KEY_INIT_NP */
- _dispatch_thread_key_create(&dispatch_queue_key,
- _dispatch_queue_cleanup);
- _dispatch_thread_key_create(&dispatch_sema4_key,
- (void (*)(void *))dispatch_release); // use the extern release
- _dispatch_thread_key_create(&dispatch_cache_key,
- _dispatch_cache_cleanup2);
-#ifdef DISPATCH_PERF_MON
- _dispatch_thread_key_create(&dispatch_bcounter_key, NULL);
-#endif
-#endif /* HAVE_PTHREAD_KEY_INIT_NP */
-
- _dispatch_thread_setspecific(dispatch_queue_key, &_dispatch_main_q);
-
- _dispatch_queue_set_width_init();
-}
-
-void
-_dispatch_queue_unlock(dispatch_queue_t dq)
-{
- if (slowpath(dispatch_atomic_dec(&dq->dq_running))) {
- return;
- }
-
- _dispatch_wakeup(dq);
-}
-
-// 6618342 Contact the team that owns the Instrument DTrace probe before renaming this symbol
-dispatch_queue_t
-_dispatch_wakeup(dispatch_object_t dou)
-{
- dispatch_queue_t tq;
-
- if (slowpath(DISPATCH_OBJECT_SUSPENDED(dou._do))) {
- return NULL;
- }
- if (!dx_probe(dou._do) && !dou._dq->dq_items_tail) {
- return NULL;
- }
-
- if (!_dispatch_trylock(dou._do)) {
-#if DISPATCH_COCOA_COMPAT
- if (dou._dq == &_dispatch_main_q) {
- _dispatch_queue_wakeup_main();
- }
-#endif
- return NULL;
- }
- _dispatch_retain(dou._do);
- tq = dou._do->do_targetq;
- _dispatch_queue_push(tq, dou._do);
- return tq; // libdispatch doesn't need this, but the Instrument DTrace probe does
-}
-
-#if DISPATCH_COCOA_COMPAT
-DISPATCH_NOINLINE
-void
-_dispatch_queue_wakeup_main(void)
-{
- kern_return_t kr;
-
- dispatch_once_f(&_dispatch_main_q_port_pred, NULL, _dispatch_main_q_port_init);
-
- kr = _dispatch_send_wakeup_main_thread(main_q_port, 0);
-
- switch (kr) {
- case MACH_SEND_TIMEOUT:
- case MACH_SEND_TIMED_OUT:
- case MACH_SEND_INVALID_DEST:
- break;
- default:
- (void)dispatch_assume_zero(kr);
- break;
- }
-
- _dispatch_safe_fork = false;
-}
-#endif
-
-#if HAVE_PTHREAD_WORKQUEUES
-static inline int
-_dispatch_rootq2wq_pri(long idx)
-{
-#ifdef WORKQ_DEFAULT_PRIOQUEUE
- switch (idx) {
- case 0:
- case 1:
- return WORKQ_LOW_PRIOQUEUE;
- case 2:
- case 3:
- default:
- return WORKQ_DEFAULT_PRIOQUEUE;
- case 4:
- case 5:
- return WORKQ_HIGH_PRIOQUEUE;
- }
-#else
- return pri;
-#endif
-}
-#endif
-
-static void
-_dispatch_root_queues_init(void *context __attribute__((unused)))
-{
-#if HAVE_PTHREAD_WORKQUEUES
- bool disable_wq = getenv("LIBDISPATCH_DISABLE_KWQ");
- pthread_workqueue_attr_t pwq_attr;
- int r;
-#endif
-#if USE_MACH_SEM
- kern_return_t kr;
-#endif
-#if USE_POSIX_SEM
- int ret;
-#endif
- int i;
-
-#if HAVE_PTHREAD_WORKQUEUES
- r = pthread_workqueue_attr_init_np(&pwq_attr);
- (void)dispatch_assume_zero(r);
-#endif
-
- for (i = 0; i < DISPATCH_ROOT_QUEUE_COUNT; i++) {
-// some software hangs if the non-overcommitting queues do not overcommit when threads block
-#if 0
- if (!(i & 1)) {
- dispatch_root_queue_contexts[i].dgq_thread_pool_size = _dispatch_hw_config.cc_max_active;
- }
-#endif
-#if HAVE_PTHREAD_WORKQUEUES
- r = pthread_workqueue_attr_setqueuepriority_np(&pwq_attr, _dispatch_rootq2wq_pri(i));
- (void)dispatch_assume_zero(r);
- r = pthread_workqueue_attr_setovercommit_np(&pwq_attr, i & 1);
- (void)dispatch_assume_zero(r);
- r = 0;
- if (disable_wq || (r = pthread_workqueue_create_np(&_dispatch_root_queue_contexts[i].dgq_kworkqueue, &pwq_attr))) {
- if (r != ENOTSUP) {
- (void)dispatch_assume_zero(r);
- }
-#endif /* HAVE_PTHREAD_WORKQUEUES */
-#if USE_MACH_SEM
- // override the default FIFO behavior for the pool semaphores
- kr = semaphore_create(mach_task_self(), &_dispatch_thread_mediator[i].dsema_port, SYNC_POLICY_LIFO, 0);
- DISPATCH_VERIFY_MIG(kr);
- (void)dispatch_assume_zero(kr);
- dispatch_assume(_dispatch_thread_mediator[i].dsema_port);
-#endif
-#if USE_POSIX_SEM
- /* XXXRW: POSIX semaphores don't support LIFO? */
- ret = sem_init(&_dispatch_thread_mediator[i].dsema_sem, 0, 0);
- (void)dispatch_assume_zero(ret);
-#endif
-#if USE_WIN32_SEM
- _dispatch_thread_mediator[i].dsema_handle = CreateSemaphore(NULL, 0, LONG_MAX, NULL);
- dispatch_assume(_dispatch_thread_mediator[i].dsema_handle);
-#endif
-#if HAVE_PTHREAD_WORKQUEUES
- } else {
- (void)dispatch_assume(_dispatch_root_queue_contexts[i].dgq_kworkqueue);
- }
-#endif
- }
-
-#if HAVE_PTHREAD_WORKQUEUES
- r = pthread_workqueue_attr_destroy_np(&pwq_attr);
- (void)dispatch_assume_zero(r);
-#endif
-}
-
-bool
-_dispatch_queue_wakeup_global(dispatch_queue_t dq)
-{
- static dispatch_once_t pred;
- struct dispatch_root_queue_context_s *qc = dq->do_ctxt;
-#if HAVE_PTHREAD_WORKQUEUES
- pthread_workitem_handle_t wh;
- unsigned int gen_cnt;
-#endif
- pthread_t pthr;
- int r, t_count;
-
- if (!dq->dq_items_tail) {
- return false;
- }
-
- _dispatch_safe_fork = false;
-
- dispatch_debug_queue(dq, __PRETTY_FUNCTION__);
-
- dispatch_once_f(&pred, NULL, _dispatch_root_queues_init);
-
-#if HAVE_PTHREAD_WORKQUEUES
- if (qc->dgq_kworkqueue) {
- if (dispatch_atomic_cmpxchg(&qc->dgq_pending, 0, 1)) {
- _dispatch_debug("requesting new worker thread");
-
- r = pthread_workqueue_additem_np(qc->dgq_kworkqueue, _dispatch_worker_thread2, dq, &wh, &gen_cnt);
- (void)dispatch_assume_zero(r);
- } else {
- _dispatch_debug("work thread request still pending on global queue: %p", dq);
- }
- goto out;
- }
-#endif
-
- if (dispatch_semaphore_signal(qc->dgq_thread_mediator)) {
- goto out;
- }
-
- do {
- t_count = qc->dgq_thread_pool_size;
- if (!t_count) {
- _dispatch_debug("The thread pool is full: %p", dq);
- goto out;
- }
- } while (!dispatch_atomic_cmpxchg(&qc->dgq_thread_pool_size, t_count, t_count - 1));
-
- while ((r = pthread_create(&pthr, NULL, _dispatch_worker_thread, dq))) {
- if (r != EAGAIN) {
- (void)dispatch_assume_zero(r);
- }
- sleep(1);
- }
- r = pthread_detach(pthr);
- (void)dispatch_assume_zero(r);
-
-out:
- return false;
-}
-
-void
-_dispatch_queue_serial_drain_till_empty(dispatch_queue_t dq)
-{
-#if DISPATCH_PERF_MON
- uint64_t start = _dispatch_absolute_time();
-#endif
- _dispatch_queue_drain(dq);
-#if DISPATCH_PERF_MON
- _dispatch_queue_merge_stats(start);
-#endif
- _dispatch_force_cache_cleanup();
-}
-
-// 6618342 Contact the team that owns the Instrument DTrace probe before renaming this symbol
-DISPATCH_NOINLINE
-void
-_dispatch_queue_invoke(dispatch_queue_t dq)
-{
- dispatch_queue_t tq = dq->do_targetq;
-
- if (!slowpath(DISPATCH_OBJECT_SUSPENDED(dq)) && fastpath(_dispatch_queue_trylock(dq))) {
- _dispatch_queue_drain(dq);
- if (tq == dq->do_targetq) {
- tq = dx_invoke(dq);
- } else {
- tq = dq->do_targetq;
- }
- // We do not need to check the result.
- // When the suspend-count lock is dropped, then the check will happen.
- dispatch_atomic_dec(&dq->dq_running);
- if (tq) {
- return _dispatch_queue_push(tq, dq);
- }
- }
-
- dq->do_next = DISPATCH_OBJECT_LISTLESS;
- if (dispatch_atomic_sub(&dq->do_suspend_cnt, DISPATCH_OBJECT_SUSPEND_LOCK) == 0) {
- if (dq->dq_running == 0) {
- _dispatch_wakeup(dq); // verify that the queue is idle
- }
- }
- _dispatch_release(dq); // added when the queue is put on the list
-}
-
-// 6618342 Contact the team that owns the Instrument DTrace probe before renaming this symbol
+// 6618342 Contact the team that owns the Instrument DTrace probe before
+// renaming this symbol
static void
_dispatch_set_target_queue2(void *ctxt)
{
dispatch_queue_t prev_dq, dq = _dispatch_queue_get_current();
-
+
prev_dq = dq->do_targetq;
dq->do_targetq = ctxt;
_dispatch_release(prev_dq);
@@ -1352,199 +658,289 @@
void
dispatch_set_target_queue(dispatch_object_t dou, dispatch_queue_t dq)
{
+ dispatch_queue_t prev_dq;
+ unsigned long type;
+
if (slowpath(dou._do->do_xref_cnt == DISPATCH_OBJECT_GLOBAL_REFCNT)) {
return;
}
- // NOTE: we test for NULL target queues internally to detect root queues
- // therefore, if the retain crashes due to a bad input, that is OK
- _dispatch_retain(dq);
- dispatch_barrier_async_f(dou._dq, dq, _dispatch_set_target_queue2);
-}
-
-static void
-_dispatch_async_f_redirect2(void *_ctxt)
-{
- struct dispatch_continuation_s *dc = _ctxt;
- struct dispatch_continuation_s *other_dc = dc->dc_data[1];
- dispatch_queue_t old_dq, dq = dc->dc_data[0];
-
- old_dq = _dispatch_thread_getspecific(dispatch_queue_key);
- _dispatch_thread_setspecific(dispatch_queue_key, dq);
- _dispatch_continuation_pop(other_dc);
- _dispatch_thread_setspecific(dispatch_queue_key, old_dq);
-
- if (dispatch_atomic_sub(&dq->dq_running, 2) == 0) {
- _dispatch_wakeup(dq);
+ type = dx_type(dou._do) & _DISPATCH_META_TYPE_MASK;
+ if (slowpath(!dq)) {
+ bool is_concurrent_q = (type == _DISPATCH_QUEUE_TYPE &&
+ slowpath(dou._dq->dq_width > 1));
+ dq = _dispatch_get_root_queue(0, !is_concurrent_q);
}
- _dispatch_release(dq);
-}
-
-static void
-_dispatch_async_f_redirect(dispatch_queue_t dq, struct dispatch_object_s *other_dc)
-{
- dispatch_continuation_t dc = (void *)other_dc;
- dispatch_queue_t root_dq = dq;
-
- if (dc->dc_func == _dispatch_sync_f_slow2) {
- return dc->dc_func(dc->dc_ctxt);
+ // TODO: put into the vtable
+ switch(type) {
+ case _DISPATCH_QUEUE_TYPE:
+ case _DISPATCH_SOURCE_TYPE:
+ _dispatch_retain(dq);
+ return dispatch_barrier_async_f(dou._dq, dq,
+ _dispatch_set_target_queue2);
+ case _DISPATCH_IO_TYPE:
+ return _dispatch_io_set_target_queue(dou._dchannel, dq);
+ default:
+ _dispatch_retain(dq);
+ dispatch_atomic_store_barrier();
+ prev_dq = dispatch_atomic_xchg2o(dou._do, do_targetq, dq);
+ if (prev_dq) _dispatch_release(prev_dq);
+ return;
}
-
- dispatch_atomic_add(&dq->dq_running, 2);
- _dispatch_retain(dq);
-
- dc = _dispatch_continuation_alloc_cacheonly() ?: _dispatch_continuation_alloc_from_heap();
-
- dc->do_vtable = (void *)DISPATCH_OBJ_ASYNC_BIT;
- dc->dc_func = _dispatch_async_f_redirect2;
- dc->dc_ctxt = dc;
- dc->dc_data[0] = dq;
- dc->dc_data[1] = other_dc;
-
- do {
- root_dq = root_dq->do_targetq;
- } while (root_dq->do_targetq);
-
- _dispatch_queue_push(root_dq, dc);
}
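/*
 * Illustrative usage sketch (not from the imported sources): how the public
 * dispatch_set_target_queue() above is typically used to funnel several
 * serial queues through one serial target so their blocks never run
 * concurrently. Queue labels and the example_ helper name are hypothetical;
 * assumes <dispatch/dispatch.h> and Lion-era manual retain/release.
 */
static void
example_target_queue(void)
{
	dispatch_queue_t target = dispatch_queue_create("com.example.target", NULL);
	dispatch_queue_t q1 = dispatch_queue_create("com.example.q1", NULL);
	dispatch_queue_t q2 = dispatch_queue_create("com.example.q2", NULL);

	// Both queues now drain on 'target'; since 'target' is serial, work from
	// q1 and q2 is mutually exclusive while each queue stays serial itself.
	dispatch_set_target_queue(q1, target);
	dispatch_set_target_queue(q2, target);

	dispatch_async(q1, ^{ /* serialized against q2's blocks */ });
	dispatch_async(q2, ^{ /* serialized against q1's blocks */ });

	dispatch_release(q1);
	dispatch_release(q2);
	dispatch_release(target);
}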
-
void
-_dispatch_queue_drain(dispatch_queue_t dq)
+dispatch_set_current_target_queue(dispatch_queue_t dq)
{
- dispatch_queue_t orig_tq, old_dq = _dispatch_thread_getspecific(dispatch_queue_key);
- struct dispatch_object_s *dc = NULL, *next_dc = NULL;
+ dispatch_queue_t queue = _dispatch_queue_get_current();
- orig_tq = dq->do_targetq;
+ if (slowpath(!queue)) {
+ DISPATCH_CLIENT_CRASH("SPI not called from a queue");
+ }
+ if (slowpath(queue->do_xref_cnt == DISPATCH_OBJECT_GLOBAL_REFCNT)) {
+ DISPATCH_CLIENT_CRASH("SPI not supported on this queue");
+ }
+ if (slowpath(queue->dq_width != 1)) {
+ DISPATCH_CLIENT_CRASH("SPI not called from a serial queue");
+ }
+ if (slowpath(!dq)) {
+ dq = _dispatch_get_root_queue(0, true);
+ }
+ _dispatch_retain(dq);
+ _dispatch_set_target_queue2(dq);
+}
- _dispatch_thread_setspecific(dispatch_queue_key, dq);
+#pragma mark -
+#pragma mark dispatch_queue_specific
- while (dq->dq_items_tail) {
- while (!fastpath(dq->dq_items_head)) {
- _dispatch_hardware_pause();
+struct dispatch_queue_specific_queue_s {
+ DISPATCH_STRUCT_HEADER(dispatch_queue_specific_queue_s,
+ dispatch_queue_specific_queue_vtable_s);
+ DISPATCH_QUEUE_HEADER;
+ union {
+ char _dqsq_pad[DISPATCH_QUEUE_MIN_LABEL_SIZE];
+ struct {
+ char dq_label[16];
+ TAILQ_HEAD(dispatch_queue_specific_head_s,
+ dispatch_queue_specific_s) dqsq_contexts;
+ };
+ };
+};
+DISPATCH_DECL(dispatch_queue_specific_queue);
+
+static void
+_dispatch_queue_specific_queue_dispose(dispatch_queue_specific_queue_t dqsq);
+
+struct dispatch_queue_specific_queue_vtable_s {
+ DISPATCH_VTABLE_HEADER(dispatch_queue_specific_queue_s);
+};
+
+static const struct dispatch_queue_specific_queue_vtable_s
+ _dispatch_queue_specific_queue_vtable = {
+ .do_type = DISPATCH_QUEUE_SPECIFIC_TYPE,
+ .do_kind = "queue-context",
+ .do_dispose = _dispatch_queue_specific_queue_dispose,
+ .do_invoke = NULL,
+ .do_probe = (void *)dummy_function_r0,
+ .do_debug = (void *)dispatch_queue_debug,
+};
+
+struct dispatch_queue_specific_s {
+ const void *dqs_key;
+ void *dqs_ctxt;
+ dispatch_function_t dqs_destructor;
+ TAILQ_ENTRY(dispatch_queue_specific_s) dqs_list;
+};
+DISPATCH_DECL(dispatch_queue_specific);
+
+static void
+_dispatch_queue_specific_queue_dispose(dispatch_queue_specific_queue_t dqsq)
+{
+ dispatch_queue_specific_t dqs, tmp;
+
+ TAILQ_FOREACH_SAFE(dqs, &dqsq->dqsq_contexts, dqs_list, tmp) {
+ if (dqs->dqs_destructor) {
+ dispatch_async_f(_dispatch_get_root_queue(
+ DISPATCH_QUEUE_PRIORITY_DEFAULT, false), dqs->dqs_ctxt,
+ dqs->dqs_destructor);
}
+ free(dqs);
+ }
+ _dispatch_queue_dispose((dispatch_queue_t)dqsq);
+}
- dc = dq->dq_items_head;
- dq->dq_items_head = NULL;
+static void
+_dispatch_queue_init_specific(dispatch_queue_t dq)
+{
+ dispatch_queue_specific_queue_t dqsq;
- do {
- // Enqueue is TIGHTLY controlled, we won't wait long.
- do {
- next_dc = fastpath(dc->do_next);
- } while (!next_dc && !dispatch_atomic_cmpxchg(&dq->dq_items_tail, dc, NULL));
- if (DISPATCH_OBJECT_SUSPENDED(dq)) {
- goto out;
+ dqsq = calloc(1ul, sizeof(struct dispatch_queue_specific_queue_s));
+ _dispatch_queue_init((dispatch_queue_t)dqsq);
+ dqsq->do_vtable = &_dispatch_queue_specific_queue_vtable;
+ dqsq->do_xref_cnt = 0;
+ dqsq->do_targetq = _dispatch_get_root_queue(DISPATCH_QUEUE_PRIORITY_HIGH,
+ true);
+ dqsq->dq_width = UINT32_MAX;
+ strlcpy(dqsq->dq_label, "queue-specific", sizeof(dqsq->dq_label));
+ TAILQ_INIT(&dqsq->dqsq_contexts);
+ dispatch_atomic_store_barrier();
+ if (slowpath(!dispatch_atomic_cmpxchg2o(dq, dq_specific_q, NULL, dqsq))) {
+ _dispatch_release((dispatch_queue_t)dqsq);
+ }
+}
+
+static void
+_dispatch_queue_set_specific(void *ctxt)
+{
+ dispatch_queue_specific_t dqs, dqsn = ctxt;
+ dispatch_queue_specific_queue_t dqsq =
+ (dispatch_queue_specific_queue_t)_dispatch_queue_get_current();
+
+ TAILQ_FOREACH(dqs, &dqsq->dqsq_contexts, dqs_list) {
+ if (dqs->dqs_key == dqsn->dqs_key) {
+ // Destroy previous context for existing key
+ if (dqs->dqs_destructor) {
+ dispatch_async_f(_dispatch_get_root_queue(
+ DISPATCH_QUEUE_PRIORITY_DEFAULT, false), dqs->dqs_ctxt,
+ dqs->dqs_destructor);
}
- if (dq->dq_running > dq->dq_width) {
- goto out;
- }
- if (orig_tq != dq->do_targetq) {
- goto out;
- }
- if (fastpath(dq->dq_width == 1)) {
- _dispatch_continuation_pop(dc);
- _dispatch_workitem_inc();
- } else if ((long)dc->do_vtable & DISPATCH_OBJ_BARRIER_BIT) {
- if (dq->dq_running > 1) {
- goto out;
- }
- _dispatch_continuation_pop(dc);
- _dispatch_workitem_inc();
+ if (dqsn->dqs_ctxt) {
+ // Copy new context for existing key
+ dqs->dqs_ctxt = dqsn->dqs_ctxt;
+ dqs->dqs_destructor = dqsn->dqs_destructor;
} else {
- _dispatch_async_f_redirect(dq, dc);
+ // Remove context storage for existing key
+ TAILQ_REMOVE(&dqsq->dqsq_contexts, dqs, dqs_list);
+ free(dqs);
}
- } while ((dc = next_dc));
- }
-
-out:
- // if this is not a complete drain, we must undo some things
- if (slowpath(dc)) {
- // 'dc' must NOT be "popped"
- // 'dc' might be the last item
- if (next_dc || dispatch_atomic_cmpxchg(&dq->dq_items_tail, NULL, dc)) {
- dq->dq_items_head = dc;
- } else {
- while (!(next_dc = dq->dq_items_head)) {
- _dispatch_hardware_pause();
- }
- dq->dq_items_head = dc;
- dc->do_next = next_dc;
+ return free(dqsn);
}
}
-
- _dispatch_thread_setspecific(dispatch_queue_key, old_dq);
+ // Insert context storage for new key
+ TAILQ_INSERT_TAIL(&dqsq->dqsq_contexts, dqsn, dqs_list);
}
-// 6618342 Contact the team that owns the Instrument DTrace probe before renaming this symbol
+DISPATCH_NOINLINE
+void
+dispatch_queue_set_specific(dispatch_queue_t dq, const void *key,
+ void *ctxt, dispatch_function_t destructor)
+{
+ if (slowpath(!key)) {
+ return;
+ }
+ dispatch_queue_specific_t dqs;
+
+ dqs = calloc(1, sizeof(struct dispatch_queue_specific_s));
+ dqs->dqs_key = key;
+ dqs->dqs_ctxt = ctxt;
+ dqs->dqs_destructor = destructor;
+ if (slowpath(!dq->dq_specific_q)) {
+ _dispatch_queue_init_specific(dq);
+ }
+ dispatch_barrier_async_f(dq->dq_specific_q, dqs,
+ _dispatch_queue_set_specific);
+}
+
+static void
+_dispatch_queue_get_specific(void *ctxt)
+{
+ void **ctxtp = ctxt;
+ void *key = *ctxtp;
+ dispatch_queue_specific_queue_t dqsq =
+ (dispatch_queue_specific_queue_t)_dispatch_queue_get_current();
+ dispatch_queue_specific_t dqs;
+
+ TAILQ_FOREACH(dqs, &dqsq->dqsq_contexts, dqs_list) {
+ if (dqs->dqs_key == key) {
+ *ctxtp = dqs->dqs_ctxt;
+ return;
+ }
+ }
+ *ctxtp = NULL;
+}
+
+DISPATCH_NOINLINE
void *
-_dispatch_worker_thread(void *context)
+dispatch_queue_get_specific(dispatch_queue_t dq, const void *key)
{
- dispatch_queue_t dq = context;
- struct dispatch_root_queue_context_s *qc = dq->do_ctxt;
- sigset_t mask;
- int r;
-
- // workaround tweaks the kernel workqueue does for us
- r = sigfillset(&mask);
- (void)dispatch_assume_zero(r);
- r = _dispatch_pthread_sigmask(SIG_BLOCK, &mask, NULL);
- (void)dispatch_assume_zero(r);
-
- do {
- _dispatch_worker_thread2(context);
- // we use 65 seconds in case there are any timers that run once a minute
- } while (dispatch_semaphore_wait(qc->dgq_thread_mediator, dispatch_time(0, 65ull * NSEC_PER_SEC)) == 0);
-
- dispatch_atomic_inc(&qc->dgq_thread_pool_size);
- if (dq->dq_items_tail) {
- _dispatch_queue_wakeup_global(dq);
+ if (slowpath(!key)) {
+ return NULL;
}
+ void *ctxt = NULL;
- return NULL;
+ if (fastpath(dq->dq_specific_q)) {
+ ctxt = (void *)key;
+ dispatch_sync_f(dq->dq_specific_q, &ctxt, _dispatch_queue_get_specific);
+ }
+ return ctxt;
}
-// 6618342 Contact the team that owns the Instrument DTrace probe before renaming this symbol
-void
-_dispatch_worker_thread2(void *context)
+DISPATCH_NOINLINE
+void *
+dispatch_get_specific(const void *key)
{
- struct dispatch_object_s *item;
- dispatch_queue_t dq = context;
- struct dispatch_root_queue_context_s *qc = dq->do_ctxt;
-
- if (_dispatch_thread_getspecific(dispatch_queue_key)) {
- DISPATCH_CRASH("Premature thread recycling");
+ if (slowpath(!key)) {
+ return NULL;
}
+ void *ctxt = NULL;
+ dispatch_queue_t dq = _dispatch_queue_get_current();
- _dispatch_thread_setspecific(dispatch_queue_key, dq);
- qc->dgq_pending = 0;
-
-#if DISPATCH_COCOA_COMPAT
- // ensure that high-level memory management techniques do not leak/crash
- dispatch_begin_thread_4GC();
- void *pool = _dispatch_begin_NSAutoReleasePool();
-#endif
-
-#if DISPATCH_PERF_MON
- uint64_t start = _dispatch_absolute_time();
-#endif
- while ((item = fastpath(_dispatch_queue_concurrent_drain_one(dq)))) {
- _dispatch_continuation_pop(item);
+ while (slowpath(dq)) {
+ if (slowpath(dq->dq_specific_q)) {
+ ctxt = (void *)key;
+ dispatch_sync_f(dq->dq_specific_q, &ctxt,
+ _dispatch_queue_get_specific);
+ if (ctxt) break;
+ }
+ dq = dq->do_targetq;
}
-#if DISPATCH_PERF_MON
- _dispatch_queue_merge_stats(start);
-#endif
-
-#if DISPATCH_COCOA_COMPAT
- _dispatch_end_NSAutoReleasePool(pool);
- dispatch_end_thread_4GC();
-#endif
-
- _dispatch_thread_setspecific(dispatch_queue_key, NULL);
-
- _dispatch_force_cache_cleanup();
+ return ctxt;
}
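/*
 * Illustrative usage sketch (not from the imported sources): exercising the
 * new dispatch_queue_set_specific()/dispatch_get_specific() API defined
 * above. The key, label and example_ helper are hypothetical; assumes
 * <dispatch/dispatch.h>, <stdio.h>, <stdlib.h> and <string.h>.
 */
static const void *example_key = &example_key; // any unique address works

static void
example_queue_specific(void)
{
	dispatch_queue_t q = dispatch_queue_create("com.example.specific", NULL);
	char *ctxt = strdup("per-queue context");

	// free() is the destructor: per the implementation above it is invoked
	// on a root queue when the value is replaced or the queue goes away.
	dispatch_queue_set_specific(q, example_key, ctxt, free);

	dispatch_sync(q, ^{
		// dispatch_get_specific() walks the target-queue chain starting at
		// the queue the block is currently running on.
		char *found = dispatch_get_specific(example_key);
		printf("found: %s\n", found);
	});
	dispatch_release(q);
}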
-#if DISPATCH_PERF_MON
+#pragma mark -
+#pragma mark dispatch_queue_debug
+
+size_t
+_dispatch_queue_debug_attr(dispatch_queue_t dq, char* buf, size_t bufsiz)
+{
+ dispatch_queue_t target = dq->do_targetq;
+ return snprintf(buf, bufsiz, "target = %s[%p], width = 0x%x, "
+ "running = 0x%x, barrier = %d ", target ? target->dq_label : "",
+ target, dq->dq_width / 2, dq->dq_running / 2, dq->dq_running & 1);
+}
+
+size_t
+dispatch_queue_debug(dispatch_queue_t dq, char* buf, size_t bufsiz)
+{
+ size_t offset = 0;
+ offset += snprintf(&buf[offset], bufsiz - offset, "%s[%p] = { ",
+ dq->dq_label, dq);
+ offset += _dispatch_object_debug_attr(dq, &buf[offset], bufsiz - offset);
+ offset += _dispatch_queue_debug_attr(dq, &buf[offset], bufsiz - offset);
+ offset += snprintf(&buf[offset], bufsiz - offset, "}");
+ return offset;
+}
+
+#if DISPATCH_DEBUG
void
+dispatch_debug_queue(dispatch_queue_t dq, const char* str) {
+ if (fastpath(dq)) {
+ dispatch_debug(dq, "%s", str);
+ } else {
+ _dispatch_log("queue[NULL]: %s", str);
+ }
+}
+#endif
+
+#if DISPATCH_PERF_MON
+static OSSpinLock _dispatch_stats_lock;
+static size_t _dispatch_bad_ratio;
+static struct {
+ uint64_t time_total;
+ uint64_t count_total;
+ uint64_t thread_total;
+} _dispatch_stats[65]; // ffs*/fls*() returns zero when no bits are set
+
+static void
_dispatch_queue_merge_stats(uint64_t start)
{
uint64_t avg, delta = _dispatch_absolute_time() - start;
@@ -1571,153 +967,20 @@
}
#endif
-size_t
-dispatch_queue_debug_attr(dispatch_queue_t dq, char* buf, size_t bufsiz)
-{
- return snprintf(buf, bufsiz, "parent = %p ", dq->do_targetq);
-}
+#pragma mark -
+#pragma mark dispatch_continuation_t
-size_t
-dispatch_queue_debug(dispatch_queue_t dq, char* buf, size_t bufsiz)
-{
- size_t offset = 0;
- offset += snprintf(&buf[offset], bufsiz - offset, "%s[%p] = { ", dq->dq_label, dq);
- offset += dispatch_object_debug_attr(dq, &buf[offset], bufsiz - offset);
- offset += dispatch_queue_debug_attr(dq, &buf[offset], bufsiz - offset);
- offset += snprintf(&buf[offset], bufsiz - offset, "}");
- return offset;
-}
-
-#if DISPATCH_DEBUG
-void
-dispatch_debug_queue(dispatch_queue_t dq, const char* str) {
- if (fastpath(dq)) {
- dispatch_debug(dq, "%s", str);
- } else {
- _dispatch_log("queue[NULL]: %s", str);
- }
-}
-#endif
-
-#if DISPATCH_COCOA_COMPAT
-void
-_dispatch_main_queue_callback_4CF(mach_msg_header_t *msg __attribute__((unused)))
-{
- if (main_q_is_draining) {
- return;
- }
- _dispatch_queue_set_mainq_drain_state(true);
- _dispatch_queue_serial_drain_till_empty(&_dispatch_main_q);
- _dispatch_queue_set_mainq_drain_state(false);
-}
-
-mach_port_t
-_dispatch_get_main_queue_port_4CF(void)
-{
- dispatch_once_f(&_dispatch_main_q_port_pred, NULL, _dispatch_main_q_port_init);
- return main_q_port;
-}
-#endif
-
-#ifndef DISPATCH_NO_LEGACY
-static void
-dispatch_queue_attr_dispose(dispatch_queue_attr_t attr)
-{
- dispatch_queue_attr_set_finalizer_f(attr, NULL, NULL);
- _dispatch_dispose(attr);
-}
-
-static const struct dispatch_queue_attr_vtable_s dispatch_queue_attr_vtable = {
- .do_type = DISPATCH_QUEUE_ATTR_TYPE,
- .do_kind = "queue-attr",
- .do_dispose = dispatch_queue_attr_dispose,
-};
-
-dispatch_queue_attr_t
-dispatch_queue_attr_create(void)
-{
- dispatch_queue_attr_t a = calloc(1, sizeof(struct dispatch_queue_attr_s));
-
- if (a) {
- a->do_vtable = &dispatch_queue_attr_vtable;
- a->do_next = DISPATCH_OBJECT_LISTLESS;
- a->do_ref_cnt = 1;
- a->do_xref_cnt = 1;
- a->do_targetq = _dispatch_get_root_queue(0, 0);
- a->qa_flags = DISPATCH_QUEUE_OVERCOMMIT;
- }
- return a;
-}
-
-void
-dispatch_queue_attr_set_flags(dispatch_queue_attr_t attr, uint64_t flags)
-{
- dispatch_assert_zero(flags & ~DISPATCH_QUEUE_FLAGS_MASK);
- attr->qa_flags = (unsigned long)flags & DISPATCH_QUEUE_FLAGS_MASK;
-}
-
-void
-dispatch_queue_attr_set_priority(dispatch_queue_attr_t attr, int priority)
-{
- dispatch_debug_assert(attr, "NULL pointer");
- dispatch_debug_assert(priority <= DISPATCH_QUEUE_PRIORITY_HIGH && priority >= DISPATCH_QUEUE_PRIORITY_LOW, "Invalid priority");
-
- if (priority > 0) {
- priority = DISPATCH_QUEUE_PRIORITY_HIGH;
- } else if (priority < 0) {
- priority = DISPATCH_QUEUE_PRIORITY_LOW;
- }
-
- attr->qa_priority = priority;
-}
-
-void
-dispatch_queue_attr_set_finalizer_f(dispatch_queue_attr_t attr,
- void *context, dispatch_queue_finalizer_function_t finalizer)
-{
-#ifdef __BLOCKS__
- if (attr->finalizer_func == (void*)_dispatch_call_block_and_release2) {
- Block_release(attr->finalizer_ctxt);
- }
-#endif
- attr->finalizer_ctxt = context;
- attr->finalizer_func = finalizer;
-}
-
-#ifdef __BLOCKS__
-long
-dispatch_queue_attr_set_finalizer(dispatch_queue_attr_t attr,
- dispatch_queue_finalizer_t finalizer)
-{
- void *ctxt;
- dispatch_queue_finalizer_function_t func;
-
- if (finalizer) {
- if (!(ctxt = Block_copy(finalizer))) {
- return 1;
- }
- func = (void *)_dispatch_call_block_and_release2;
- } else {
- ctxt = NULL;
- func = NULL;
- }
-
- dispatch_queue_attr_set_finalizer_f(attr, ctxt, func);
-
- return 0;
-}
-#endif
-#endif /* DISPATCH_NO_LEGACY */
+static malloc_zone_t *_dispatch_ccache_zone;
static void
-_dispatch_ccache_init(void *context __attribute__((unused)))
+_dispatch_ccache_init(void *context DISPATCH_UNUSED)
{
_dispatch_ccache_zone = malloc_create_zone(0, 0);
dispatch_assert(_dispatch_ccache_zone);
malloc_set_zone_name(_dispatch_ccache_zone, "DispatchContinuations");
}
-dispatch_continuation_t
+static dispatch_continuation_t
_dispatch_continuation_alloc_from_heap(void)
{
static dispatch_once_t pred;
@@ -1725,26 +988,40 @@
dispatch_once_f(&pred, NULL, _dispatch_ccache_init);
- while (!(dc = fastpath(malloc_zone_calloc(_dispatch_ccache_zone, 1, ROUND_UP_TO_CACHELINE_SIZE(sizeof(*dc)))))) {
+ while (!(dc = fastpath(malloc_zone_calloc(_dispatch_ccache_zone, 1,
+ ROUND_UP_TO_CACHELINE_SIZE(sizeof(*dc)))))) {
sleep(1);
}
return dc;
}
-void
+DISPATCH_ALWAYS_INLINE
+static inline dispatch_continuation_t
+_dispatch_continuation_alloc_cacheonly(void)
+{
+ dispatch_continuation_t dc;
+ dc = fastpath(_dispatch_thread_getspecific(dispatch_cache_key));
+ if (dc) {
+ _dispatch_thread_setspecific(dispatch_cache_key, dc->do_next);
+ }
+ return dc;
+}
+
+static void
_dispatch_force_cache_cleanup(void)
{
- dispatch_continuation_t dc = _dispatch_thread_getspecific(dispatch_cache_key);
+ dispatch_continuation_t dc;
+ dc = _dispatch_thread_getspecific(dispatch_cache_key);
if (dc) {
_dispatch_thread_setspecific(dispatch_cache_key, NULL);
- _dispatch_cache_cleanup2(dc);
+ _dispatch_cache_cleanup(dc);
}
}
DISPATCH_NOINLINE
-void
-_dispatch_cache_cleanup2(void *value)
+static void
+_dispatch_cache_cleanup(void *value)
{
dispatch_continuation_t dc, next_dc = value;
@@ -1754,85 +1031,1280 @@
}
}
-static char _dispatch_build[16];
-
-/*
- * XXXRW: What to do here for !Mac OS X?
- */
-static void
-_dispatch_bug_init(void *context __attribute__((unused)))
+DISPATCH_ALWAYS_INLINE
+static inline void
+_dispatch_continuation_free(dispatch_continuation_t dc)
{
-#ifdef __APPLE__
- int mib[] = { CTL_KERN, KERN_OSVERSION };
- size_t bufsz = sizeof(_dispatch_build);
-
- sysctl(mib, 2, _dispatch_build, &bufsz, NULL, 0);
-#else
- memset(_dispatch_build, 0, sizeof(_dispatch_build));
-#endif
+ dispatch_continuation_t prev_dc;
+ prev_dc = _dispatch_thread_getspecific(dispatch_cache_key);
+ dc->do_next = prev_dc;
+ _dispatch_thread_setspecific(dispatch_cache_key, dc);
}
-void
-_dispatch_bug(size_t line, long val)
+DISPATCH_ALWAYS_INLINE_NDEBUG
+static inline void
+_dispatch_continuation_redirect(dispatch_queue_t dq, dispatch_object_t dou)
{
- static dispatch_once_t pred;
- static void *last_seen;
- void *ra = __builtin_return_address(0);
+ dispatch_continuation_t dc = dou._dc;
- dispatch_once_f(&pred, NULL, _dispatch_bug_init);
- if (last_seen != ra) {
- last_seen = ra;
- _dispatch_log("BUG in libdispatch: %s - %lu - 0x%lx", _dispatch_build, (unsigned long)line, val);
+ _dispatch_trace_continuation_pop(dq, dou);
+ (void)dispatch_atomic_add2o(dq, dq_running, 2);
+ if (!DISPATCH_OBJ_IS_VTABLE(dc) &&
+ (long)dc->do_vtable & DISPATCH_OBJ_SYNC_SLOW_BIT) {
+ dispatch_atomic_barrier();
+ _dispatch_thread_semaphore_signal(
+ (_dispatch_thread_semaphore_t)dc->dc_ctxt);
+ } else {
+ _dispatch_async_f_redirect(dq, dc);
}
}
-void
-_dispatch_abort(size_t line, long val)
+DISPATCH_ALWAYS_INLINE_NDEBUG
+static inline void
+_dispatch_continuation_pop(dispatch_object_t dou)
{
- _dispatch_bug(line, val);
- abort();
+ dispatch_continuation_t dc = dou._dc;
+ dispatch_group_t dg;
+
+ _dispatch_trace_continuation_pop(_dispatch_queue_get_current(), dou);
+ if (DISPATCH_OBJ_IS_VTABLE(dou._do)) {
+ return _dispatch_queue_invoke(dou._dq);
+ }
+
+ // Add the item back to the cache before calling the function. This
+ // allows the 'hot' continuation to be used for a quick callback.
+ //
+ // The ccache version is per-thread.
+ // Therefore, the object has not been reused yet.
+ // This generates better assembly.
+ if ((long)dc->do_vtable & DISPATCH_OBJ_ASYNC_BIT) {
+ _dispatch_continuation_free(dc);
+ }
+ if ((long)dc->do_vtable & DISPATCH_OBJ_GROUP_BIT) {
+ dg = dc->dc_group;
+ } else {
+ dg = NULL;
+ }
+ _dispatch_client_callout(dc->dc_ctxt, dc->dc_func);
+ if (dg) {
+ dispatch_group_leave(dg);
+ _dispatch_release(dg);
+ }
}
-void
-_dispatch_log(const char *msg, ...)
+#pragma mark -
+#pragma mark dispatch_barrier_async
+
+DISPATCH_NOINLINE
+static void
+_dispatch_barrier_async_f_slow(dispatch_queue_t dq, void *ctxt,
+ dispatch_function_t func)
{
- va_list ap;
+ dispatch_continuation_t dc = _dispatch_continuation_alloc_from_heap();
- va_start(ap, msg);
+ dc->do_vtable = (void *)(DISPATCH_OBJ_ASYNC_BIT | DISPATCH_OBJ_BARRIER_BIT);
+ dc->dc_func = func;
+ dc->dc_ctxt = ctxt;
- _dispatch_logv(msg, ap);
-
- va_end(ap);
+ _dispatch_queue_push(dq, dc);
}
+DISPATCH_NOINLINE
void
-_dispatch_logv(const char *msg, va_list ap)
+dispatch_barrier_async_f(dispatch_queue_t dq, void *ctxt,
+ dispatch_function_t func)
{
-#if DISPATCH_DEBUG
- static FILE *logfile, *tmp;
- char newbuf[strlen(msg) + 2];
- char path[PATH_MAX];
+ dispatch_continuation_t dc;
- sprintf(newbuf, "%s\n", msg);
+ dc = fastpath(_dispatch_continuation_alloc_cacheonly());
+ if (!dc) {
+ return _dispatch_barrier_async_f_slow(dq, ctxt, func);
+ }
- if (!logfile) {
- snprintf(path, sizeof(path), "/var/tmp/libdispatch.%d.log", getpid());
- tmp = fopen(path, "a");
- assert(tmp);
- if (!dispatch_atomic_cmpxchg(&logfile, NULL, tmp)) {
- fclose(tmp);
- } else {
- struct timeval tv;
- gettimeofday(&tv, NULL);
- fprintf(logfile, "=== log file opened for %s[%u] at %ld.%06u ===\n",
- getprogname() ?: "", getpid(), tv.tv_sec, tv.tv_usec);
+ dc->do_vtable = (void *)(DISPATCH_OBJ_ASYNC_BIT | DISPATCH_OBJ_BARRIER_BIT);
+ dc->dc_func = func;
+ dc->dc_ctxt = ctxt;
+
+ _dispatch_queue_push(dq, dc);
+}
+
+#ifdef __BLOCKS__
+void
+dispatch_barrier_async(dispatch_queue_t dq, void (^work)(void))
+{
+ dispatch_barrier_async_f(dq, _dispatch_Block_copy(work),
+ _dispatch_call_block_and_release);
+}
+#endif
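/*
 * Illustrative usage sketch (not from the imported sources): the
 * reader/writer pattern that the barrier bit handled above enables. Assumes
 * 'rwq' was created with the Lion DISPATCH_QUEUE_CONCURRENT attribute;
 * names are hypothetical and <stdio.h> is assumed.
 */
static void
example_barrier_readers_writer(dispatch_queue_t rwq, int *shared_value)
{
	// A barrier block runs exclusively on the concurrent queue, so it can
	// safely mutate shared state.
	dispatch_barrier_async(rwq, ^{
		*shared_value = 42;
	});
	// Plain (non-barrier) work may run concurrently with other readers,
	// but never alongside a pending or running barrier block.
	dispatch_sync(rwq, ^{
		printf("read %d\n", *shared_value);
	});
}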
+
+#pragma mark -
+#pragma mark dispatch_async
+
+static void
+_dispatch_async_f_redirect_invoke(void *_ctxt)
+{
+ struct dispatch_continuation_s *dc = _ctxt;
+ struct dispatch_continuation_s *other_dc = dc->dc_data[1];
+ dispatch_queue_t old_dq, dq = dc->dc_data[0], rq;
+
+ old_dq = _dispatch_thread_getspecific(dispatch_queue_key);
+ _dispatch_thread_setspecific(dispatch_queue_key, dq);
+ _dispatch_continuation_pop(other_dc);
+ _dispatch_thread_setspecific(dispatch_queue_key, old_dq);
+
+ rq = dq->do_targetq;
+ while (slowpath(rq->do_targetq) && rq != old_dq) {
+ if (dispatch_atomic_sub2o(rq, dq_running, 2) == 0) {
+ _dispatch_wakeup(rq);
+ }
+ rq = rq->do_targetq;
+ }
+
+ if (dispatch_atomic_sub2o(dq, dq_running, 2) == 0) {
+ _dispatch_wakeup(dq);
+ }
+ _dispatch_release(dq);
+}
+
+DISPATCH_NOINLINE
+static void
+_dispatch_async_f2_slow(dispatch_queue_t dq, dispatch_continuation_t dc)
+{
+ _dispatch_wakeup(dq);
+ _dispatch_queue_push(dq, dc);
+}
+
+DISPATCH_NOINLINE
+static void
+_dispatch_async_f_redirect(dispatch_queue_t dq,
+ dispatch_continuation_t other_dc)
+{
+ dispatch_continuation_t dc;
+ dispatch_queue_t rq;
+
+ _dispatch_retain(dq);
+
+ dc = fastpath(_dispatch_continuation_alloc_cacheonly());
+ if (!dc) {
+ dc = _dispatch_continuation_alloc_from_heap();
+ }
+
+ dc->do_vtable = (void *)DISPATCH_OBJ_ASYNC_BIT;
+ dc->dc_func = _dispatch_async_f_redirect_invoke;
+ dc->dc_ctxt = dc;
+ dc->dc_data[0] = dq;
+ dc->dc_data[1] = other_dc;
+
+ // Find the queue to redirect to
+ rq = dq->do_targetq;
+ while (slowpath(rq->do_targetq)) {
+ uint32_t running;
+
+ if (slowpath(rq->dq_items_tail) ||
+ slowpath(DISPATCH_OBJECT_SUSPENDED(rq)) ||
+ slowpath(rq->dq_width == 1)) {
+ break;
+ }
+ running = dispatch_atomic_add2o(rq, dq_running, 2) - 2;
+ if (slowpath(running & 1) || slowpath(running + 2 > rq->dq_width)) {
+ if (slowpath(dispatch_atomic_sub2o(rq, dq_running, 2) == 0)) {
+ return _dispatch_async_f2_slow(rq, dc);
+ }
+ break;
+ }
+ rq = rq->do_targetq;
+ }
+ _dispatch_queue_push(rq, dc);
+}
+
+DISPATCH_NOINLINE
+static void
+_dispatch_async_f2(dispatch_queue_t dq, dispatch_continuation_t dc)
+{
+ uint32_t running;
+ bool locked;
+
+ do {
+ if (slowpath(dq->dq_items_tail)
+ || slowpath(DISPATCH_OBJECT_SUSPENDED(dq))) {
+ break;
+ }
+ running = dispatch_atomic_add2o(dq, dq_running, 2);
+ if (slowpath(running > dq->dq_width)) {
+ if (slowpath(dispatch_atomic_sub2o(dq, dq_running, 2) == 0)) {
+ return _dispatch_async_f2_slow(dq, dc);
+ }
+ break;
+ }
+ locked = running & 1;
+ if (fastpath(!locked)) {
+ return _dispatch_async_f_redirect(dq, dc);
+ }
+ locked = dispatch_atomic_sub2o(dq, dq_running, 2) & 1;
+ // We might get lucky and find that the barrier has ended by now
+ } while (!locked);
+
+ _dispatch_queue_push(dq, dc);
+}
+
+DISPATCH_NOINLINE
+static void
+_dispatch_async_f_slow(dispatch_queue_t dq, void *ctxt,
+ dispatch_function_t func)
+{
+ dispatch_continuation_t dc = _dispatch_continuation_alloc_from_heap();
+
+ dc->do_vtable = (void *)DISPATCH_OBJ_ASYNC_BIT;
+ dc->dc_func = func;
+ dc->dc_ctxt = ctxt;
+
+ // No fastpath/slowpath hint because we simply don't know
+ if (dq->do_targetq) {
+ return _dispatch_async_f2(dq, dc);
+ }
+
+ _dispatch_queue_push(dq, dc);
+}
+
+DISPATCH_NOINLINE
+void
+dispatch_async_f(dispatch_queue_t dq, void *ctxt, dispatch_function_t func)
+{
+ dispatch_continuation_t dc;
+
+ // No fastpath/slowpath hint because we simply don't know
+ if (dq->dq_width == 1) {
+ return dispatch_barrier_async_f(dq, ctxt, func);
+ }
+
+ dc = fastpath(_dispatch_continuation_alloc_cacheonly());
+ if (!dc) {
+ return _dispatch_async_f_slow(dq, ctxt, func);
+ }
+
+ dc->do_vtable = (void *)DISPATCH_OBJ_ASYNC_BIT;
+ dc->dc_func = func;
+ dc->dc_ctxt = ctxt;
+
+ // No fastpath/slowpath hint because we simply don't know
+ if (dq->do_targetq) {
+ return _dispatch_async_f2(dq, dc);
+ }
+
+ _dispatch_queue_push(dq, dc);
+}
+
+#ifdef __BLOCKS__
+void
+dispatch_async(dispatch_queue_t dq, void (^work)(void))
+{
+ dispatch_async_f(dq, _dispatch_Block_copy(work),
+ _dispatch_call_block_and_release);
+}
+#endif
+
+#pragma mark -
+#pragma mark dispatch_group_async
+
+DISPATCH_NOINLINE
+void
+dispatch_group_async_f(dispatch_group_t dg, dispatch_queue_t dq, void *ctxt,
+ dispatch_function_t func)
+{
+ dispatch_continuation_t dc;
+
+ _dispatch_retain(dg);
+ dispatch_group_enter(dg);
+
+ dc = fastpath(_dispatch_continuation_alloc_cacheonly());
+ if (!dc) {
+ dc = _dispatch_continuation_alloc_from_heap();
+ }
+
+ dc->do_vtable = (void *)(DISPATCH_OBJ_ASYNC_BIT | DISPATCH_OBJ_GROUP_BIT);
+ dc->dc_func = func;
+ dc->dc_ctxt = ctxt;
+ dc->dc_group = dg;
+
+ // No fastpath/slowpath hint because we simply don't know
+ if (dq->dq_width != 1 && dq->do_targetq) {
+ return _dispatch_async_f2(dq, dc);
+ }
+
+ _dispatch_queue_push(dq, dc);
+}
+
+#ifdef __BLOCKS__
+void
+dispatch_group_async(dispatch_group_t dg, dispatch_queue_t dq,
+ dispatch_block_t db)
+{
+ dispatch_group_async_f(dg, dq, _dispatch_Block_copy(db),
+ _dispatch_call_block_and_release);
+}
+#endif
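/*
 * Illustrative usage sketch (not from the imported sources): pairing
 * dispatch_group_async() above with dispatch_group_notify(). The group
 * enter taken in dispatch_group_async_f() is balanced by the
 * dispatch_group_leave() in _dispatch_continuation_pop() once each block
 * finishes. Names are hypothetical; assumes <dispatch/dispatch.h> and
 * <stdio.h>.
 */
static void
example_group(void)
{
	dispatch_group_t group = dispatch_group_create();
	dispatch_queue_t q =
			dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);

	dispatch_group_async(group, q, ^{ /* work item 1 */ });
	dispatch_group_async(group, q, ^{ /* work item 2 */ });

	// Submitted once both items have completed.
	dispatch_group_notify(group, dispatch_get_main_queue(), ^{
		printf("all group work finished\n");
	});
	dispatch_release(group);
}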
+
+#pragma mark -
+#pragma mark dispatch_function_invoke
+
+DISPATCH_ALWAYS_INLINE
+static inline void
+_dispatch_function_invoke(dispatch_queue_t dq, void *ctxt,
+ dispatch_function_t func)
+{
+ dispatch_queue_t old_dq = _dispatch_thread_getspecific(dispatch_queue_key);
+ _dispatch_thread_setspecific(dispatch_queue_key, dq);
+ _dispatch_client_callout(ctxt, func);
+ _dispatch_workitem_inc();
+ _dispatch_thread_setspecific(dispatch_queue_key, old_dq);
+}
+
+struct dispatch_function_recurse_s {
+ dispatch_queue_t dfr_dq;
+ void* dfr_ctxt;
+ dispatch_function_t dfr_func;
+};
+
+static void
+_dispatch_function_recurse_invoke(void *ctxt)
+{
+ struct dispatch_function_recurse_s *dfr = ctxt;
+ _dispatch_function_invoke(dfr->dfr_dq, dfr->dfr_ctxt, dfr->dfr_func);
+}
+
+DISPATCH_ALWAYS_INLINE
+static inline void
+_dispatch_function_recurse(dispatch_queue_t dq, void *ctxt,
+ dispatch_function_t func)
+{
+ struct dispatch_function_recurse_s dfr = {
+ .dfr_dq = dq,
+ .dfr_func = func,
+ .dfr_ctxt = ctxt,
+ };
+ dispatch_sync_f(dq->do_targetq, &dfr, _dispatch_function_recurse_invoke);
+}
+
+#pragma mark -
+#pragma mark dispatch_barrier_sync
+
+struct dispatch_barrier_sync_slow_s {
+ DISPATCH_CONTINUATION_HEADER(dispatch_barrier_sync_slow_s);
+};
+
+struct dispatch_barrier_sync_slow2_s {
+ dispatch_queue_t dbss2_dq;
+#if DISPATCH_COCOA_COMPAT
+ dispatch_function_t dbss2_func;
+ void *dbss2_ctxt;
+#endif
+ _dispatch_thread_semaphore_t dbss2_sema;
+};
+
+static void
+_dispatch_barrier_sync_f_slow_invoke(void *ctxt)
+{
+ struct dispatch_barrier_sync_slow2_s *dbss2 = ctxt;
+
+ dispatch_assert(dbss2->dbss2_dq == _dispatch_queue_get_current());
+#if DISPATCH_COCOA_COMPAT
+ // When the main queue is bound to the main thread
+ if (dbss2->dbss2_dq == &_dispatch_main_q && pthread_main_np()) {
+ dbss2->dbss2_func(dbss2->dbss2_ctxt);
+ dbss2->dbss2_func = NULL;
+ dispatch_atomic_barrier();
+ _dispatch_thread_semaphore_signal(dbss2->dbss2_sema);
+ return;
+ }
+#endif
+ (void)dispatch_atomic_add2o(dbss2->dbss2_dq, do_suspend_cnt,
+ DISPATCH_OBJECT_SUSPEND_INTERVAL);
+ // rdar://9032024 running lock must be held until sync_f_slow returns
+ (void)dispatch_atomic_add2o(dbss2->dbss2_dq, dq_running, 2);
+ dispatch_atomic_barrier();
+ _dispatch_thread_semaphore_signal(dbss2->dbss2_sema);
+}
+
+DISPATCH_NOINLINE
+static void
+_dispatch_barrier_sync_f_slow(dispatch_queue_t dq, void *ctxt,
+ dispatch_function_t func)
+{
+ // It's preferred to execute synchronous blocks on the current thread
+ // due to thread-local side effects, garbage collection, etc. However,
+ // blocks submitted to the main thread MUST be run on the main thread
+
+ struct dispatch_barrier_sync_slow2_s dbss2 = {
+ .dbss2_dq = dq,
+#if DISPATCH_COCOA_COMPAT
+ .dbss2_func = func,
+ .dbss2_ctxt = ctxt,
+#endif
+ .dbss2_sema = _dispatch_get_thread_semaphore(),
+ };
+ struct dispatch_barrier_sync_slow_s dbss = {
+ .do_vtable = (void *)(DISPATCH_OBJ_BARRIER_BIT |
+ DISPATCH_OBJ_SYNC_SLOW_BIT),
+ .dc_func = _dispatch_barrier_sync_f_slow_invoke,
+ .dc_ctxt = &dbss2,
+ };
+ _dispatch_queue_push(dq, (void *)&dbss);
+
+ _dispatch_thread_semaphore_wait(dbss2.dbss2_sema);
+ _dispatch_put_thread_semaphore(dbss2.dbss2_sema);
+
+#if DISPATCH_COCOA_COMPAT
+ // Main queue bound to main thread
+ if (dbss2.dbss2_func == NULL) {
+ return;
+ }
+#endif
+ dispatch_atomic_acquire_barrier();
+ if (slowpath(dq->do_targetq) && slowpath(dq->do_targetq->do_targetq)) {
+ _dispatch_function_recurse(dq, ctxt, func);
+ } else {
+ _dispatch_function_invoke(dq, ctxt, func);
+ }
+ dispatch_atomic_release_barrier();
+ if (fastpath(dq->do_suspend_cnt < 2 * DISPATCH_OBJECT_SUSPEND_INTERVAL)) {
+ // rdar://problem/8290662 "lock transfer"
+ // ensure drain of current barrier sync has finished
+ while (slowpath(dq->dq_running > 2)) {
+ _dispatch_hardware_pause();
+ }
+ _dispatch_thread_semaphore_t sema;
+ sema = _dispatch_queue_drain_one_barrier_sync(dq);
+ if (sema) {
+ _dispatch_thread_semaphore_signal(sema);
+ return;
}
}
- vfprintf(logfile, newbuf, ap);
- fflush(logfile);
-#else
- vsyslog(LOG_NOTICE, msg, ap);
+ (void)dispatch_atomic_sub2o(dq, do_suspend_cnt,
+ DISPATCH_OBJECT_SUSPEND_INTERVAL);
+ if (slowpath(dispatch_atomic_sub2o(dq, dq_running, 2) == 0)) {
+ _dispatch_wakeup(dq);
+ }
+}
+
+DISPATCH_NOINLINE
+static void
+_dispatch_barrier_sync_f2(dispatch_queue_t dq)
+{
+ if (!slowpath(DISPATCH_OBJECT_SUSPENDED(dq))) {
+ // rdar://problem/8290662 "lock transfer"
+ _dispatch_thread_semaphore_t sema;
+ sema = _dispatch_queue_drain_one_barrier_sync(dq);
+ if (sema) {
+ (void)dispatch_atomic_add2o(dq, do_suspend_cnt,
+ DISPATCH_OBJECT_SUSPEND_INTERVAL);
+ // rdar://9032024 running lock must be held until sync_f_slow
+ // returns: increment by 2 and decrement by 1
+ (void)dispatch_atomic_inc2o(dq, dq_running);
+ _dispatch_thread_semaphore_signal(sema);
+ return;
+ }
+ }
+ if (slowpath(dispatch_atomic_dec2o(dq, dq_running) == 0)) {
+ _dispatch_wakeup(dq);
+ }
+}
+
+DISPATCH_NOINLINE
+static void
+_dispatch_barrier_sync_f_invoke(dispatch_queue_t dq, void *ctxt,
+ dispatch_function_t func)
+{
+ dispatch_atomic_acquire_barrier();
+ _dispatch_function_invoke(dq, ctxt, func);
+ dispatch_atomic_release_barrier();
+ if (slowpath(dq->dq_items_tail)) {
+ return _dispatch_barrier_sync_f2(dq);
+ }
+ if (slowpath(dispatch_atomic_dec2o(dq, dq_running) == 0)) {
+ _dispatch_wakeup(dq);
+ }
+}
+
+DISPATCH_NOINLINE
+static void
+_dispatch_barrier_sync_f_recurse(dispatch_queue_t dq, void *ctxt,
+ dispatch_function_t func)
+{
+ dispatch_atomic_acquire_barrier();
+ _dispatch_function_recurse(dq, ctxt, func);
+ dispatch_atomic_release_barrier();
+ if (slowpath(dq->dq_items_tail)) {
+ return _dispatch_barrier_sync_f2(dq);
+ }
+ if (slowpath(dispatch_atomic_dec2o(dq, dq_running) == 0)) {
+ _dispatch_wakeup(dq);
+ }
+}
+
+DISPATCH_NOINLINE
+void
+dispatch_barrier_sync_f(dispatch_queue_t dq, void *ctxt,
+ dispatch_function_t func)
+{
+ // 1) ensure that this thread hasn't enqueued anything ahead of this call
+ // 2) the queue is not suspended
+ if (slowpath(dq->dq_items_tail) || slowpath(DISPATCH_OBJECT_SUSPENDED(dq))){
+ return _dispatch_barrier_sync_f_slow(dq, ctxt, func);
+ }
+ if (slowpath(!dispatch_atomic_cmpxchg2o(dq, dq_running, 0, 1))) {
+		// global queues and the main queue bound to the main thread always
+		// fall into the slow case
+ return _dispatch_barrier_sync_f_slow(dq, ctxt, func);
+ }
+ if (slowpath(dq->do_targetq->do_targetq)) {
+ return _dispatch_barrier_sync_f_recurse(dq, ctxt, func);
+ }
+ _dispatch_barrier_sync_f_invoke(dq, ctxt, func);
+}
+
+#ifdef __BLOCKS__
+#if DISPATCH_COCOA_COMPAT
+DISPATCH_NOINLINE
+static void
+_dispatch_barrier_sync_slow(dispatch_queue_t dq, void (^work)(void))
+{
+ // Blocks submitted to the main queue MUST be run on the main thread,
+ // therefore under GC we must Block_copy in order to notify the thread-local
+ // garbage collector that the objects are transferring to the main thread
+ // rdar://problem/7176237&7181849&7458685
+ if (dispatch_begin_thread_4GC) {
+ dispatch_block_t block = _dispatch_Block_copy(work);
+ return dispatch_barrier_sync_f(dq, block,
+ _dispatch_call_block_and_release);
+ }
+ struct Block_basic *bb = (void *)work;
+ dispatch_barrier_sync_f(dq, work, (dispatch_function_t)bb->Block_invoke);
+}
#endif
+
+void
+dispatch_barrier_sync(dispatch_queue_t dq, void (^work)(void))
+{
+#if DISPATCH_COCOA_COMPAT
+ if (slowpath(dq == &_dispatch_main_q)) {
+ return _dispatch_barrier_sync_slow(dq, work);
+ }
+#endif
+ struct Block_basic *bb = (void *)work;
+ dispatch_barrier_sync_f(dq, work, (dispatch_function_t)bb->Block_invoke);
+}
+#endif
+
+#pragma mark -
+#pragma mark dispatch_sync
+
+DISPATCH_NOINLINE
+static void
+_dispatch_sync_f_slow(dispatch_queue_t dq, void *ctxt, dispatch_function_t func)
+{
+ _dispatch_thread_semaphore_t sema = _dispatch_get_thread_semaphore();
+ struct dispatch_sync_slow_s {
+ DISPATCH_CONTINUATION_HEADER(dispatch_sync_slow_s);
+ } dss = {
+ .do_vtable = (void*)DISPATCH_OBJ_SYNC_SLOW_BIT,
+ .dc_ctxt = (void*)sema,
+ };
+ _dispatch_queue_push(dq, (void *)&dss);
+
+ _dispatch_thread_semaphore_wait(sema);
+ _dispatch_put_thread_semaphore(sema);
+
+ if (slowpath(dq->do_targetq->do_targetq)) {
+ _dispatch_function_recurse(dq, ctxt, func);
+ } else {
+ _dispatch_function_invoke(dq, ctxt, func);
+ }
+ if (slowpath(dispatch_atomic_sub2o(dq, dq_running, 2) == 0)) {
+ _dispatch_wakeup(dq);
+ }
+}
+
+DISPATCH_NOINLINE
+static void
+_dispatch_sync_f_slow2(dispatch_queue_t dq, void *ctxt,
+ dispatch_function_t func)
+{
+ if (slowpath(dispatch_atomic_sub2o(dq, dq_running, 2) == 0)) {
+ _dispatch_wakeup(dq);
+ }
+ _dispatch_sync_f_slow(dq, ctxt, func);
+}
+
+DISPATCH_NOINLINE
+static void
+_dispatch_sync_f_invoke(dispatch_queue_t dq, void *ctxt,
+ dispatch_function_t func)
+{
+ _dispatch_function_invoke(dq, ctxt, func);
+ if (slowpath(dispatch_atomic_sub2o(dq, dq_running, 2) == 0)) {
+ _dispatch_wakeup(dq);
+ }
+}
+
+DISPATCH_NOINLINE
+static void
+_dispatch_sync_f_recurse(dispatch_queue_t dq, void *ctxt,
+ dispatch_function_t func)
+{
+ _dispatch_function_recurse(dq, ctxt, func);
+ if (slowpath(dispatch_atomic_sub2o(dq, dq_running, 2) == 0)) {
+ _dispatch_wakeup(dq);
+ }
+}
+
+DISPATCH_NOINLINE
+static void
+_dispatch_sync_f2(dispatch_queue_t dq, void *ctxt, dispatch_function_t func)
+{
+ // 1) ensure that this thread hasn't enqueued anything ahead of this call
+ // 2) the queue is not suspended
+ if (slowpath(dq->dq_items_tail) || slowpath(DISPATCH_OBJECT_SUSPENDED(dq))){
+ return _dispatch_sync_f_slow(dq, ctxt, func);
+ }
+ if (slowpath(dispatch_atomic_add2o(dq, dq_running, 2) & 1)) {
+ return _dispatch_sync_f_slow2(dq, ctxt, func);
+ }
+ if (slowpath(dq->do_targetq->do_targetq)) {
+ return _dispatch_sync_f_recurse(dq, ctxt, func);
+ }
+ _dispatch_sync_f_invoke(dq, ctxt, func);
+}
+
+DISPATCH_NOINLINE
+void
+dispatch_sync_f(dispatch_queue_t dq, void *ctxt, dispatch_function_t func)
+{
+ if (fastpath(dq->dq_width == 1)) {
+ return dispatch_barrier_sync_f(dq, ctxt, func);
+ }
+ if (slowpath(!dq->do_targetq)) {
+ // the global root queues do not need strict ordering
+ (void)dispatch_atomic_add2o(dq, dq_running, 2);
+ return _dispatch_sync_f_invoke(dq, ctxt, func);
+ }
+ _dispatch_sync_f2(dq, ctxt, func);
+}
+
+#ifdef __BLOCKS__
+#if DISPATCH_COCOA_COMPAT
+DISPATCH_NOINLINE
+static void
+_dispatch_sync_slow(dispatch_queue_t dq, void (^work)(void))
+{
+ // Blocks submitted to the main queue MUST be run on the main thread,
+ // therefore under GC we must Block_copy in order to notify the thread-local
+ // garbage collector that the objects are transferring to the main thread
+ // rdar://problem/7176237&7181849&7458685
+ if (dispatch_begin_thread_4GC) {
+ dispatch_block_t block = _dispatch_Block_copy(work);
+ return dispatch_sync_f(dq, block, _dispatch_call_block_and_release);
+ }
+ struct Block_basic *bb = (void *)work;
+ dispatch_sync_f(dq, work, (dispatch_function_t)bb->Block_invoke);
+}
+#endif
+
+void
+dispatch_sync(dispatch_queue_t dq, void (^work)(void))
+{
+#if DISPATCH_COCOA_COMPAT
+ if (slowpath(dq == &_dispatch_main_q)) {
+ return _dispatch_sync_slow(dq, work);
+ }
+#endif
+ struct Block_basic *bb = (void *)work;
+ dispatch_sync_f(dq, work, (dispatch_function_t)bb->Block_invoke);
+}
+#endif
+
+#pragma mark -
+#pragma mark dispatch_after
+
+struct _dispatch_after_time_s {
+ void *datc_ctxt;
+ void (*datc_func)(void *);
+ dispatch_source_t ds;
+};
+
+static void
+_dispatch_after_timer_callback(void *ctxt)
+{
+ struct _dispatch_after_time_s *datc = ctxt;
+
+ dispatch_assert(datc->datc_func);
+ _dispatch_client_callout(datc->datc_ctxt, datc->datc_func);
+
+ dispatch_source_t ds = datc->ds;
+ free(datc);
+
+ dispatch_source_cancel(ds); // Needed until 7287561 gets integrated
+ dispatch_release(ds);
+}
+
+DISPATCH_NOINLINE
+void
+dispatch_after_f(dispatch_time_t when, dispatch_queue_t queue, void *ctxt,
+ dispatch_function_t func)
+{
+ uint64_t delta;
+ struct _dispatch_after_time_s *datc = NULL;
+ dispatch_source_t ds;
+
+ if (when == DISPATCH_TIME_FOREVER) {
+#if DISPATCH_DEBUG
+ DISPATCH_CLIENT_CRASH(
+ "dispatch_after_f() called with 'when' == infinity");
+#endif
+ return;
+ }
+
+ // this function can and should be optimized to not use a dispatch source
+ delta = _dispatch_timeout(when);
+ if (delta == 0) {
+ return dispatch_async_f(queue, ctxt, func);
+ }
+ // on successful creation, source owns malloc-ed context (which it frees in
+ // the event handler)
+ ds = dispatch_source_create(DISPATCH_SOURCE_TYPE_TIMER, 0, 0, queue);
+ dispatch_assert(ds);
+
+ datc = malloc(sizeof(*datc));
+ dispatch_assert(datc);
+ datc->datc_ctxt = ctxt;
+ datc->datc_func = func;
+ datc->ds = ds;
+
+ dispatch_set_context(ds, datc);
+ dispatch_source_set_event_handler_f(ds, _dispatch_after_timer_callback);
+ dispatch_source_set_timer(ds, when, DISPATCH_TIME_FOREVER, 0);
+ dispatch_resume(ds);
+}
+
+#ifdef __BLOCKS__
+void
+dispatch_after(dispatch_time_t when, dispatch_queue_t queue,
+ dispatch_block_t work)
+{
+ // test before the copy of the block
+ if (when == DISPATCH_TIME_FOREVER) {
+#if DISPATCH_DEBUG
+ DISPATCH_CLIENT_CRASH(
+ "dispatch_after() called with 'when' == infinity");
+#endif
+ return;
+ }
+ dispatch_after_f(when, queue, _dispatch_Block_copy(work),
+ _dispatch_call_block_and_release);
+}
+#endif
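/*
 * Illustrative usage sketch (not from the imported sources): the public
 * dispatch_after() call backed by the timer-source implementation above.
 * The two-second delay and the example_ helper name are arbitrary; assumes
 * <dispatch/dispatch.h> and <stdio.h>.
 */
static void
example_after(void)
{
	dispatch_time_t when = dispatch_time(DISPATCH_TIME_NOW, 2ull * NSEC_PER_SEC);
	dispatch_after(when, dispatch_get_main_queue(), ^{
		printf("ran roughly two seconds after submission\n");
	});
}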
+
+#pragma mark -
+#pragma mark dispatch_wakeup
+
+DISPATCH_NOINLINE
+void
+_dispatch_queue_push_list_slow(dispatch_queue_t dq,
+ struct dispatch_object_s *obj)
+{
+ // The queue must be retained before dq_items_head is written in order
+ // to ensure that the reference is still valid when _dispatch_wakeup is
+ // called. Otherwise, if preempted between the assignment to
+ // dq_items_head and _dispatch_wakeup, the blocks submitted to the
+ // queue may release the last reference to the queue when invoked by
+ // _dispatch_queue_drain. <rdar://problem/6932776>
+ _dispatch_retain(dq);
+ dq->dq_items_head = obj;
+ _dispatch_wakeup(dq);
+ _dispatch_release(dq);
+}
+
+// 6618342 Contact the team that owns the Instrument DTrace probe before
+// renaming this symbol
+dispatch_queue_t
+_dispatch_wakeup(dispatch_object_t dou)
+{
+ dispatch_queue_t tq;
+
+ if (slowpath(DISPATCH_OBJECT_SUSPENDED(dou._do))) {
+ return NULL;
+ }
+ if (!dx_probe(dou._do) && !dou._dq->dq_items_tail) {
+ return NULL;
+ }
+
+ // _dispatch_source_invoke() relies on this testing the whole suspend count
+ // word, not just the lock bit. In other words, no point taking the lock
+ // if the source is suspended or canceled.
+ if (!dispatch_atomic_cmpxchg2o(dou._do, do_suspend_cnt, 0,
+ DISPATCH_OBJECT_SUSPEND_LOCK)) {
+#if DISPATCH_COCOA_COMPAT
+ if (dou._dq == &_dispatch_main_q) {
+ _dispatch_queue_wakeup_main();
+ }
+#endif
+ return NULL;
+ }
+ _dispatch_retain(dou._do);
+ tq = dou._do->do_targetq;
+ _dispatch_queue_push(tq, dou._do);
+ return tq; // libdispatch does not need this, but the Instrument DTrace
+ // probe does
+}
+
+#if DISPATCH_COCOA_COMPAT
+DISPATCH_NOINLINE
+void
+_dispatch_queue_wakeup_main(void)
+{
+ kern_return_t kr;
+
+ dispatch_once_f(&_dispatch_main_q_port_pred, NULL,
+ _dispatch_main_q_port_init);
+
+ kr = _dispatch_send_wakeup_main_thread(main_q_port, 0);
+
+ switch (kr) {
+ case MACH_SEND_TIMEOUT:
+ case MACH_SEND_TIMED_OUT:
+ case MACH_SEND_INVALID_DEST:
+ break;
+ default:
+ (void)dispatch_assume_zero(kr);
+ break;
+ }
+
+ _dispatch_safe_fork = false;
+}
+#endif
+
+static bool
+_dispatch_queue_wakeup_global(dispatch_queue_t dq)
+{
+ static dispatch_once_t pred;
+ struct dispatch_root_queue_context_s *qc = dq->do_ctxt;
+ int r;
+
+ if (!dq->dq_items_tail) {
+ return false;
+ }
+
+ _dispatch_safe_fork = false;
+
+ dispatch_debug_queue(dq, __PRETTY_FUNCTION__);
+
+ dispatch_once_f(&pred, NULL, _dispatch_root_queues_init);
+
+#if HAVE_PTHREAD_WORKQUEUES
+#if DISPATCH_ENABLE_THREAD_POOL
+ if (qc->dgq_kworkqueue)
+#endif
+ {
+ if (dispatch_atomic_cmpxchg2o(qc, dgq_pending, 0, 1)) {
+ pthread_workitem_handle_t wh;
+ unsigned int gen_cnt;
+ _dispatch_debug("requesting new worker thread");
+
+ r = pthread_workqueue_additem_np(qc->dgq_kworkqueue,
+ _dispatch_worker_thread2, dq, &wh, &gen_cnt);
+ (void)dispatch_assume_zero(r);
+ } else {
+ _dispatch_debug("work thread request still pending on global "
+ "queue: %p", dq);
+ }
+ goto out;
+ }
+#endif // HAVE_PTHREAD_WORKQUEUES
+#if DISPATCH_ENABLE_THREAD_POOL
+ if (dispatch_semaphore_signal(qc->dgq_thread_mediator)) {
+ goto out;
+ }
+
+ pthread_t pthr;
+ int t_count;
+ do {
+ t_count = qc->dgq_thread_pool_size;
+ if (!t_count) {
+ _dispatch_debug("The thread pool is full: %p", dq);
+ goto out;
+ }
+ } while (!dispatch_atomic_cmpxchg2o(qc, dgq_thread_pool_size, t_count,
+ t_count - 1));
+
+ while ((r = pthread_create(&pthr, NULL, _dispatch_worker_thread, dq))) {
+ if (r != EAGAIN) {
+ (void)dispatch_assume_zero(r);
+ }
+ sleep(1);
+ }
+ r = pthread_detach(pthr);
+ (void)dispatch_assume_zero(r);
+#endif // DISPATCH_ENABLE_THREAD_POOL
+
+out:
+ return false;
+}
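+
+// Editor's note: a minimal standalone sketch (not part of this patch) of the
+// slot-reservation pattern used above: atomically claim a slot from a pool
+// counter, spawn a detached worker, and keep retrying pthread_create() while
+// it fails (typically EAGAIN). It uses C11 atomics instead of the
+// dispatch_atomic_* macros; example_pool_size and example_worker are
+// hypothetical names. Wrapped in #if 0 so it is not built.
+#if 0
+#include <pthread.h>
+#include <stdatomic.h>
+#include <stdbool.h>
+#include <unistd.h>
+
+static _Atomic int example_pool_size = 8;
+
+static void *
+example_worker(void *ctxt)
+{
+	return ctxt; // a real worker would drain a queue here
+}
+
+static bool
+example_spawn_worker(void)
+{
+	int count = atomic_load(&example_pool_size);
+	do {
+		if (count == 0) {
+			return false; // pool exhausted; mirrors the "pool is full" path
+		}
+	} while (!atomic_compare_exchange_weak(&example_pool_size, &count,
+			count - 1));
+
+	pthread_t pthr;
+	// pthread_create() reports a transient resource shortage as EAGAIN;
+	// like the code above, back off and retry until it succeeds.
+	while (pthread_create(&pthr, NULL, example_worker, NULL) != 0) {
+		sleep(1);
+	}
+	(void)pthread_detach(pthr);
+	return true;
+}
+#endif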
+
+#pragma mark -
+#pragma mark dispatch_queue_drain
+
+// 6618342 Contact the team that owns the Instrument DTrace probe before
+// renaming this symbol
+DISPATCH_NOINLINE
+void
+_dispatch_queue_invoke(dispatch_queue_t dq)
+{
+ if (!slowpath(DISPATCH_OBJECT_SUSPENDED(dq)) &&
+ fastpath(dispatch_atomic_cmpxchg2o(dq, dq_running, 0, 1))) {
+ dispatch_atomic_acquire_barrier();
+ dispatch_queue_t otq = dq->do_targetq, tq = NULL;
+ _dispatch_queue_drain(dq);
+ if (dq->do_vtable->do_invoke) {
+ // Assume that the object's invoke checks that it is executing on the
+ // correct queue
+ tq = dx_invoke(dq);
+ } else if (slowpath(otq != dq->do_targetq)) {
+ // An item on the queue changed the target queue
+ tq = dq->do_targetq;
+ }
+ // We do not need to check the result here; the check happens when the
+ // suspend-count lock is dropped.
+ dispatch_atomic_release_barrier();
+ (void)dispatch_atomic_dec2o(dq, dq_running);
+ if (tq) {
+ return _dispatch_queue_push(tq, dq);
+ }
+ }
+
+ dq->do_next = DISPATCH_OBJECT_LISTLESS;
+ if (!dispatch_atomic_sub2o(dq, do_suspend_cnt,
+ DISPATCH_OBJECT_SUSPEND_LOCK)) {
+ if (dq->dq_running == 0) {
+ _dispatch_wakeup(dq); // verify that the queue is idle
+ }
+ }
+ _dispatch_release(dq); // added when the queue is put on the list
+}
+
+static void
+_dispatch_queue_drain(dispatch_queue_t dq)
+{
+ dispatch_queue_t orig_tq, old_dq;
+ old_dq = _dispatch_thread_getspecific(dispatch_queue_key);
+ struct dispatch_object_s *dc = NULL, *next_dc = NULL;
+
+ // Continue draining sources after target queue change rdar://8928171
+ bool check_tq = (dx_type(dq) != DISPATCH_SOURCE_KEVENT_TYPE);
+
+ orig_tq = dq->do_targetq;
+
+ _dispatch_thread_setspecific(dispatch_queue_key, dq);
+ //dispatch_debug_queue(dq, __PRETTY_FUNCTION__);
+
+ while (dq->dq_items_tail) {
+ while (!(dc = fastpath(dq->dq_items_head))) {
+ _dispatch_hardware_pause();
+ }
+ dq->dq_items_head = NULL;
+ do {
+ next_dc = fastpath(dc->do_next);
+ if (!next_dc &&
+ !dispatch_atomic_cmpxchg2o(dq, dq_items_tail, dc, NULL)) {
+ // Enqueue is TIGHTLY controlled; we won't wait long.
+ while (!(next_dc = fastpath(dc->do_next))) {
+ _dispatch_hardware_pause();
+ }
+ }
+ if (DISPATCH_OBJECT_SUSPENDED(dq)) {
+ goto out;
+ }
+ if (dq->dq_running > dq->dq_width) {
+ goto out;
+ }
+ if (slowpath(orig_tq != dq->do_targetq) && check_tq) {
+ goto out;
+ }
+ if (fastpath(dq->dq_width == 1)) {
+ _dispatch_continuation_pop(dc);
+ _dispatch_workitem_inc();
+ } else if (!DISPATCH_OBJ_IS_VTABLE(dc) &&
+ (long)dc->do_vtable & DISPATCH_OBJ_BARRIER_BIT) {
+ if (dq->dq_running > 1) {
+ goto out;
+ }
+ _dispatch_continuation_pop(dc);
+ _dispatch_workitem_inc();
+ } else {
+ _dispatch_continuation_redirect(dq, dc);
+ }
+ } while ((dc = next_dc));
+ }
+
+out:
+ // if this is not a complete drain, we must undo some things
+ if (slowpath(dc)) {
+ // 'dc' must NOT be "popped"
+ // 'dc' might be the last item
+ if (!next_dc &&
+ !dispatch_atomic_cmpxchg2o(dq, dq_items_tail, NULL, dc)) {
+ // wait for enqueue slow path to finish
+ while (!(next_dc = fastpath(dq->dq_items_head))) {
+ _dispatch_hardware_pause();
+ }
+ dc->do_next = next_dc;
+ }
+ dq->dq_items_head = dc;
+ }
+
+ _dispatch_thread_setspecific(dispatch_queue_key, old_dq);
+}
+
+static void
+_dispatch_queue_serial_drain_till_empty(dispatch_queue_t dq)
+{
+#if DISPATCH_PERF_MON
+ uint64_t start = _dispatch_absolute_time();
+#endif
+ _dispatch_queue_drain(dq);
+#if DISPATCH_PERF_MON
+ _dispatch_queue_merge_stats(start);
+#endif
+ _dispatch_force_cache_cleanup();
+}
+
+#if DISPATCH_COCOA_COMPAT
+void
+_dispatch_main_queue_drain(void)
+{
+ dispatch_queue_t dq = &_dispatch_main_q;
+ if (!dq->dq_items_tail) {
+ return;
+ }
+ struct dispatch_main_queue_drain_marker_s {
+ DISPATCH_CONTINUATION_HEADER(dispatch_main_queue_drain_marker_s);
+ } marker = {
+ .do_vtable = NULL,
+ };
+ struct dispatch_object_s *dmarker = (void*)&marker;
+ _dispatch_queue_push_notrace(dq, dmarker);
+
+#if DISPATCH_PERF_MON
+ uint64_t start = _dispatch_absolute_time();
+#endif
+ dispatch_queue_t old_dq = _dispatch_thread_getspecific(dispatch_queue_key);
+ _dispatch_thread_setspecific(dispatch_queue_key, dq);
+
+ struct dispatch_object_s *dc = NULL, *next_dc = NULL;
+ while (dq->dq_items_tail) {
+ while (!(dc = fastpath(dq->dq_items_head))) {
+ _dispatch_hardware_pause();
+ }
+ dq->dq_items_head = NULL;
+ do {
+ next_dc = fastpath(dc->do_next);
+ if (!next_dc &&
+ !dispatch_atomic_cmpxchg2o(dq, dq_items_tail, dc, NULL)) {
+ // Enqueue is TIGHTLY controlled; we won't wait long.
+ while (!(next_dc = fastpath(dc->do_next))) {
+ _dispatch_hardware_pause();
+ }
+ }
+ if (dc == dmarker) {
+ if (next_dc) {
+ dq->dq_items_head = next_dc;
+ _dispatch_queue_wakeup_main();
+ }
+ goto out;
+ }
+ _dispatch_continuation_pop(dc);
+ _dispatch_workitem_inc();
+ } while ((dc = next_dc));
+ }
+ dispatch_assert(dc); // did not encounter marker
+
+out:
+ _dispatch_thread_setspecific(dispatch_queue_key, old_dq);
+#if DISPATCH_PERF_MON
+ _dispatch_queue_merge_stats(start);
+#endif
+ _dispatch_force_cache_cleanup();
+}
+#endif
+
+DISPATCH_ALWAYS_INLINE_NDEBUG
+static inline _dispatch_thread_semaphore_t
+_dispatch_queue_drain_one_barrier_sync(dispatch_queue_t dq)
+{
+ // rdar://problem/8290662 "lock transfer"
+ struct dispatch_object_s *dc, *next_dc;
+
+ // queue is locked, or suspended and not being drained
+ dc = dq->dq_items_head;
+ if (slowpath(!dc) || DISPATCH_OBJ_IS_VTABLE(dc) || ((long)dc->do_vtable &
+ (DISPATCH_OBJ_BARRIER_BIT | DISPATCH_OBJ_SYNC_SLOW_BIT)) !=
+ (DISPATCH_OBJ_BARRIER_BIT | DISPATCH_OBJ_SYNC_SLOW_BIT)) {
+ return 0;
+ }
+ // dequeue dc, it is a barrier sync
+ next_dc = fastpath(dc->do_next);
+ dq->dq_items_head = next_dc;
+ if (!next_dc && !dispatch_atomic_cmpxchg2o(dq, dq_items_tail, dc, NULL)) {
+ // Enqueue is TIGHTLY controlled; we won't wait long.
+ while (!(next_dc = fastpath(dc->do_next))) {
+ _dispatch_hardware_pause();
+ }
+ dq->dq_items_head = next_dc;
+ }
+ _dispatch_trace_continuation_pop(dq, dc);
+ _dispatch_workitem_inc();
+
+ struct dispatch_barrier_sync_slow_s *dbssp = (void *)dc;
+ struct dispatch_barrier_sync_slow2_s *dbss2p = dbssp->dc_ctxt;
+ return dbss2p->dbss2_sema;
+}
+
+static struct dispatch_object_s *
+_dispatch_queue_concurrent_drain_one(dispatch_queue_t dq)
+{
+ struct dispatch_object_s *head, *next, *const mediator = (void *)~0ul;
+
+ // The mediator value acts both as a "lock" and a signal
+ head = dispatch_atomic_xchg2o(dq, dq_items_head, mediator);
+
+ if (slowpath(head == NULL)) {
+ // The first xchg on the tail will tell the enqueueing thread that it
+ // is safe to blindly write out to the head pointer. A cmpxchg (rather
+ // than a blind store) restores the head only if it still holds the
+ // mediator, and so honors that protocol.
+ (void)dispatch_atomic_cmpxchg2o(dq, dq_items_head, mediator, NULL);
+ _dispatch_debug("no work on global work queue");
+ return NULL;
+ }
+
+ if (slowpath(head == mediator)) {
+ // This thread lost the race for ownership of the queue.
+ //
+ // The ratio of work to libdispatch overhead must be bad. This
+ // scenario implies that there are too many threads in the pool.
+ // Create a new pending thread and then exit this thread.
+ // The kernel will grant a new thread when the load subsides.
+ _dispatch_debug("Contention on queue: %p", dq);
+ _dispatch_queue_wakeup_global(dq);
+#if DISPATCH_PERF_MON
+ dispatch_atomic_inc(&_dispatch_bad_ratio);
+#endif
+ return NULL;
+ }
+
+ // Restore the head pointer to a sane value before returning.
+ // If 'next' is NULL, then this item _might_ be the last item.
+ next = fastpath(head->do_next);
+
+ if (slowpath(!next)) {
+ dq->dq_items_head = NULL;
+
+ if (dispatch_atomic_cmpxchg2o(dq, dq_items_tail, head, NULL)) {
+ // both head and tail are NULL now
+ goto out;
+ }
+
+ // There must be a next item now. This thread won't wait long.
+ while (!(next = head->do_next)) {
+ _dispatch_hardware_pause();
+ }
+ }
+
+ dq->dq_items_head = next;
+ _dispatch_queue_wakeup_global(dq);
+out:
+ return head;
+}
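+
+// Editor's note: a standalone sketch (not part of this patch) of the
+// "mediator" technique used above: the drainer takes ownership of the head
+// with an atomic exchange against a sentinel, so an empty queue and a lost
+// race are both detected without blocking. The tail reconciliation done by
+// the real code when head->next is NULL is omitted; C11 atomics stand in for
+// the dispatch_atomic_* macros and all names are hypothetical. Wrapped in
+// #if 0 so it is not built.
+#if 0
+#include <stdatomic.h>
+#include <stddef.h>
+
+struct example_node {
+	struct example_node *next;
+};
+
+#define EXAMPLE_MEDIATOR ((struct example_node *)~0ul)
+
+static _Atomic(struct example_node *) example_head;
+
+static struct example_node *
+example_drain_one(void)
+{
+	// Claim the head: whatever was there is now ours, and other drainers
+	// (or an empty queue) will observe the mediator instead.
+	struct example_node *head = atomic_exchange(&example_head,
+			EXAMPLE_MEDIATOR);
+	if (head == NULL) {
+		// Queue looked empty: put NULL back only if we still own the slot,
+		// mirroring the cmpxchg in the code above.
+		struct example_node *expected = EXAMPLE_MEDIATOR;
+		(void)atomic_compare_exchange_strong(&example_head, &expected, NULL);
+		return NULL;
+	}
+	if (head == EXAMPLE_MEDIATOR) {
+		return NULL; // another drainer won the race (the contention path)
+	}
+	// Restore the head so other drainers can continue, then work on 'head'.
+	atomic_store(&example_head, head->next);
+	return head;
+}
+#endif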
+
+#pragma mark -
+#pragma mark dispatch_worker_thread
+
+// 6618342 Contact the team that owns the Instrument DTrace probe before
+// renaming this symbol
+static void
+_dispatch_worker_thread2(void *context)
+{
+ struct dispatch_object_s *item;
+ dispatch_queue_t dq = context;
+ struct dispatch_root_queue_context_s *qc = dq->do_ctxt;
+
+ if (_dispatch_thread_getspecific(dispatch_queue_key)) {
+ DISPATCH_CRASH("Premature thread recycling");
+ }
+
+ _dispatch_thread_setspecific(dispatch_queue_key, dq);
+ qc->dgq_pending = 0;
+
+#if DISPATCH_COCOA_COMPAT
+ (void)dispatch_atomic_inc(&_dispatch_worker_threads);
+ // ensure that high-level memory management techniques do not leak/crash
+ if (dispatch_begin_thread_4GC) {
+ dispatch_begin_thread_4GC();
+ }
+ void *pool = _dispatch_begin_NSAutoReleasePool();
+#endif
+
+#if DISPATCH_PERF_MON
+ uint64_t start = _dispatch_absolute_time();
+#endif
+ while ((item = fastpath(_dispatch_queue_concurrent_drain_one(dq)))) {
+ _dispatch_continuation_pop(item);
+ }
+#if DISPATCH_PERF_MON
+ _dispatch_queue_merge_stats(start);
+#endif
+
+#if DISPATCH_COCOA_COMPAT
+ _dispatch_end_NSAutoReleasePool(pool);
+ dispatch_end_thread_4GC();
+ if (!dispatch_atomic_dec(&_dispatch_worker_threads) &&
+ dispatch_no_worker_threads_4GC) {
+ dispatch_no_worker_threads_4GC();
+ }
+#endif
+
+ _dispatch_thread_setspecific(dispatch_queue_key, NULL);
+
+ _dispatch_force_cache_cleanup();
+
+}
+
+#if DISPATCH_ENABLE_THREAD_POOL
+// 6618342 Contact the team that owns the Instrument DTrace probe before
+// renaming this symbol
+static void *
+_dispatch_worker_thread(void *context)
+{
+ dispatch_queue_t dq = context;
+ struct dispatch_root_queue_context_s *qc = dq->do_ctxt;
+ sigset_t mask;
+ int r;
+
+ // work around tweaks the kernel workqueue does for us
+ r = sigfillset(&mask);
+ (void)dispatch_assume_zero(r);
+ r = _dispatch_pthread_sigmask(SIG_BLOCK, &mask, NULL);
+ (void)dispatch_assume_zero(r);
+
+ do {
+ _dispatch_worker_thread2(context);
+ // we use 65 seconds in case there are any timers that run once a minute
+ } while (dispatch_semaphore_wait(qc->dgq_thread_mediator,
+ dispatch_time(0, 65ull * NSEC_PER_SEC)) == 0);
+
+ (void)dispatch_atomic_inc2o(qc, dgq_thread_pool_size);
+ if (dq->dq_items_tail) {
+ _dispatch_queue_wakeup_global(dq);
+ }
+
+ return NULL;
}
int
@@ -1863,135 +2335,487 @@
return pthread_sigmask(how, set, oset);
}
-
-bool _dispatch_safe_fork = true;
-
-void
-dispatch_atfork_prepare(void)
-{
-}
-
-void
-dispatch_atfork_parent(void)
-{
-}
-
-void
-dispatch_atfork_child(void)
-{
- void *crash = (void *)0x100;
- size_t i;
-
- if (_dispatch_safe_fork) {
- return;
- }
-
- _dispatch_main_q.dq_items_head = crash;
- _dispatch_main_q.dq_items_tail = crash;
-
- _dispatch_mgr_q.dq_items_head = crash;
- _dispatch_mgr_q.dq_items_tail = crash;
-
- for (i = 0; i < DISPATCH_ROOT_QUEUE_COUNT; i++) {
- _dispatch_root_queues[i].dq_items_head = crash;
- _dispatch_root_queues[i].dq_items_tail = crash;
- }
-}
-
-void
-dispatch_init_pthread(pthread_t pthr __attribute__((unused)))
-{
-}
-
-const struct dispatch_queue_offsets_s dispatch_queue_offsets = {
- .dqo_version = 3,
- .dqo_label = offsetof(struct dispatch_queue_s, dq_label),
- .dqo_label_size = sizeof(_dispatch_main_q.dq_label),
- .dqo_flags = 0,
- .dqo_flags_size = 0,
- .dqo_width = offsetof(struct dispatch_queue_s, dq_width),
- .dqo_width_size = sizeof(_dispatch_main_q.dq_width),
- .dqo_serialnum = offsetof(struct dispatch_queue_s, dq_serialnum),
- .dqo_serialnum_size = sizeof(_dispatch_main_q.dq_serialnum),
- .dqo_running = offsetof(struct dispatch_queue_s, dq_running),
- .dqo_running_size = sizeof(_dispatch_main_q.dq_running),
-};
-
-#ifdef __BLOCKS__
-void
-dispatch_after(dispatch_time_t when, dispatch_queue_t queue, dispatch_block_t work)
-{
- // test before the copy of the block
- if (when == DISPATCH_TIME_FOREVER) {
-#if DISPATCH_DEBUG
- DISPATCH_CLIENT_CRASH("dispatch_after() called with 'when' == infinity");
-#endif
- return;
- }
- dispatch_after_f(when, queue, _dispatch_Block_copy(work), _dispatch_call_block_and_release);
-}
#endif
-struct _dispatch_after_time_s {
- void *datc_ctxt;
- void (*datc_func)(void *);
- dispatch_source_t ds;
-};
+#pragma mark -
+#pragma mark dispatch_main_queue
+static bool _dispatch_program_is_probably_callback_driven;
+
+#if DISPATCH_COCOA_COMPAT
static void
-_dispatch_after_timer_cancel(void *ctxt)
+_dispatch_main_q_port_init(void *ctxt DISPATCH_UNUSED)
{
- struct _dispatch_after_time_s *datc = ctxt;
- dispatch_source_t ds = datc->ds;
+ kern_return_t kr;
- free(datc);
- dispatch_release(ds); // MUST NOT be _dispatch_release()
+ kr = mach_port_allocate(mach_task_self(), MACH_PORT_RIGHT_RECEIVE,
+ &main_q_port);
+ DISPATCH_VERIFY_MIG(kr);
+ (void)dispatch_assume_zero(kr);
+ kr = mach_port_insert_right(mach_task_self(), main_q_port, main_q_port,
+ MACH_MSG_TYPE_MAKE_SEND);
+ DISPATCH_VERIFY_MIG(kr);
+ (void)dispatch_assume_zero(kr);
+
+ _dispatch_program_is_probably_callback_driven = true;
+ _dispatch_safe_fork = false;
}
-static void
-_dispatch_after_timer_callback(void *ctxt)
+mach_port_t
+_dispatch_get_main_queue_port_4CF(void)
{
- struct _dispatch_after_time_s *datc = ctxt;
+ dispatch_once_f(&_dispatch_main_q_port_pred, NULL,
+ _dispatch_main_q_port_init);
+ return main_q_port;
+}
- dispatch_assert(datc->datc_func);
- datc->datc_func(datc->datc_ctxt);
+static bool main_q_is_draining;
- dispatch_source_cancel(datc->ds);
+// 6618342 Contact the team that owns the Instrument DTrace probe before
+// renaming this symbol
+DISPATCH_NOINLINE
+static void
+_dispatch_queue_set_mainq_drain_state(bool arg)
+{
+ main_q_is_draining = arg;
+}
+
+void
+_dispatch_main_queue_callback_4CF(mach_msg_header_t *msg DISPATCH_UNUSED)
+{
+ if (main_q_is_draining) {
+ return;
+ }
+ _dispatch_queue_set_mainq_drain_state(true);
+ _dispatch_main_queue_drain();
+ _dispatch_queue_set_mainq_drain_state(false);
+}
+
+#endif
+
+void
+dispatch_main(void)
+{
+#if HAVE_PTHREAD_MAIN_NP
+ if (pthread_main_np()) {
+#endif
+ _dispatch_program_is_probably_callback_driven = true;
+ pthread_exit(NULL);
+ DISPATCH_CRASH("pthread_exit() returned");
+#if HAVE_PTHREAD_MAIN_NP
+ }
+ DISPATCH_CLIENT_CRASH("dispatch_main() must be called on the main thread");
+#endif
+}
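+
+// Editor's note: an illustrative sketch (not part of this patch) of how
+// dispatch_main() above is meant to be used by a purely callback-driven
+// program: install work on the main queue, then park the main thread. The
+// one-second timer and its handler body are hypothetical. Wrapped in #if 0
+// so it is not built.
+#if 0
+#include <dispatch/dispatch.h>
+
+int
+main(void)
+{
+	dispatch_source_t timer = dispatch_source_create(
+			DISPATCH_SOURCE_TYPE_TIMER, 0, 0, dispatch_get_main_queue());
+	dispatch_source_set_timer(timer, DISPATCH_TIME_NOW, NSEC_PER_SEC, 0);
+	dispatch_source_set_event_handler(timer, ^{
+		/* periodic work on the main queue */
+	});
+	dispatch_resume(timer);
+
+	// Never returns: the main thread exits via pthread_exit() and main-queue
+	// blocks are run by the callback-driven machinery set up above.
+	dispatch_main();
+}
+#endif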
+
+DISPATCH_NOINLINE DISPATCH_NORETURN
+static void
+_dispatch_sigsuspend(void)
+{
+ static const sigset_t mask;
+
+#if DISPATCH_COCOA_COMPAT
+ // Do not count the signal handling thread as a worker thread
+ (void)dispatch_atomic_dec(&_dispatch_worker_threads);
+#endif
+ for (;;) {
+ sigsuspend(&mask);
+ }
+}
+
+DISPATCH_NORETURN
+static void
+_dispatch_sig_thread(void *ctxt DISPATCH_UNUSED)
+{
+ // never returns, so burn bridges behind us
+ _dispatch_clear_stack(0);
+ _dispatch_sigsuspend();
}
DISPATCH_NOINLINE
-void
-dispatch_after_f(dispatch_time_t when, dispatch_queue_t queue, void *ctxt, void (*func)(void *))
+static void
+_dispatch_queue_cleanup2(void)
{
- uint64_t delta;
- struct _dispatch_after_time_s *datc = NULL;
- dispatch_source_t ds = NULL;
+ (void)dispatch_atomic_dec(&_dispatch_main_q.dq_running);
- if (when == DISPATCH_TIME_FOREVER) {
-#if DISPATCH_DEBUG
- DISPATCH_CLIENT_CRASH("dispatch_after_f() called with 'when' == infinity");
+ if (dispatch_atomic_sub(&_dispatch_main_q.do_suspend_cnt,
+ DISPATCH_OBJECT_SUSPEND_LOCK) == 0) {
+ _dispatch_wakeup(&_dispatch_main_q);
+ }
+
+ // Overload the "probably" variable to mean that dispatch_main() or a
+ // similar non-POSIX API was called; this has to run before the
+ // DISPATCH_COCOA_COMPAT block below.
+ if (_dispatch_program_is_probably_callback_driven) {
+ dispatch_async_f(_dispatch_get_root_queue(0, false), NULL,
+ _dispatch_sig_thread);
+ sleep(1); // workaround 6778970
+ }
+
+#if DISPATCH_COCOA_COMPAT
+ dispatch_once_f(&_dispatch_main_q_port_pred, NULL,
+ _dispatch_main_q_port_init);
+
+ mach_port_t mp = main_q_port;
+ kern_return_t kr;
+
+ main_q_port = 0;
+
+ if (mp) {
+ kr = mach_port_deallocate(mach_task_self(), mp);
+ DISPATCH_VERIFY_MIG(kr);
+ (void)dispatch_assume_zero(kr);
+ kr = mach_port_mod_refs(mach_task_self(), mp, MACH_PORT_RIGHT_RECEIVE,
+ -1);
+ DISPATCH_VERIFY_MIG(kr);
+ (void)dispatch_assume_zero(kr);
+ }
#endif
- return;
+}
+
+static void
+_dispatch_queue_cleanup(void *ctxt)
+{
+ if (ctxt == &_dispatch_main_q) {
+ return _dispatch_queue_cleanup2();
+ }
+ // POSIX specifies that TSD destructors are called only if 'ctxt' is
+ // non-null
+ DISPATCH_CRASH("Premature thread exit while a dispatch queue is running");
+}
+
+#pragma mark -
+#pragma mark dispatch_manager_queue
+
+static unsigned int _dispatch_select_workaround;
+static fd_set _dispatch_rfds;
+static fd_set _dispatch_wfds;
+static void **_dispatch_rfd_ptrs;
+static void **_dispatch_wfd_ptrs;
+
+static int _dispatch_kq;
+
+static void
+_dispatch_get_kq_init(void *context DISPATCH_UNUSED)
+{
+ static const struct kevent kev = {
+ .ident = 1,
+ .filter = EVFILT_USER,
+ .flags = EV_ADD|EV_CLEAR,
+ };
+
+ _dispatch_kq = kqueue();
+
+ _dispatch_safe_fork = false;
+
+ if (_dispatch_kq == -1) {
+ DISPATCH_CLIENT_CRASH("kqueue() create failed: "
+ "probably out of file descriptors");
+ } else if (dispatch_assume(_dispatch_kq < FD_SETSIZE)) {
+ // in case we fall back to select()
+ FD_SET(_dispatch_kq, &_dispatch_rfds);
}
- delta = _dispatch_timeout(when);
- if (delta == 0) {
- return dispatch_async_f(queue, ctxt, func);
+ (void)dispatch_assume_zero(kevent(_dispatch_kq, &kev, 1, NULL, 0, NULL));
+
+ _dispatch_queue_push(_dispatch_mgr_q.do_targetq, &_dispatch_mgr_q);
+}
+
+static int
+_dispatch_get_kq(void)
+{
+ static dispatch_once_t pred;
+
+ dispatch_once_f(&pred, NULL, _dispatch_get_kq_init);
+
+ return _dispatch_kq;
+}
+
+long
+_dispatch_update_kq(const struct kevent *kev)
+{
+ struct kevent kev_copy = *kev;
+ // This ensures we don't get a pending kevent back while registering
+ // a new kevent
+ kev_copy.flags |= EV_RECEIPT;
+
+ if (_dispatch_select_workaround && (kev_copy.flags & EV_DELETE)) {
+ // Only executed on manager queue
+ switch (kev_copy.filter) {
+ case EVFILT_READ:
+ if (kev_copy.ident < FD_SETSIZE &&
+ FD_ISSET((int)kev_copy.ident, &_dispatch_rfds)) {
+ FD_CLR((int)kev_copy.ident, &_dispatch_rfds);
+ _dispatch_rfd_ptrs[kev_copy.ident] = 0;
+ (void)dispatch_atomic_dec(&_dispatch_select_workaround);
+ return 0;
+ }
+ break;
+ case EVFILT_WRITE:
+ if (kev_copy.ident < FD_SETSIZE &&
+ FD_ISSET((int)kev_copy.ident, &_dispatch_wfds)) {
+ FD_CLR((int)kev_copy.ident, &_dispatch_wfds);
+ _dispatch_wfd_ptrs[kev_copy.ident] = 0;
+ (void)dispatch_atomic_dec(&_dispatch_select_workaround);
+ return 0;
+ }
+ break;
+ default:
+ break;
+ }
}
- // this function should be optimized to not use a dispatch source
- ds = dispatch_source_create(DISPATCH_SOURCE_TYPE_TIMER, 0, 0, queue);
- dispatch_assert(ds);
+ int rval = kevent(_dispatch_get_kq(), &kev_copy, 1, &kev_copy, 1, NULL);
+ if (rval == -1) {
+ // The kevent() call itself failed, i.e. for a reason other than a
+ // per-changelist-element error (those are reported via EV_RECEIPT in
+ // kev_copy.data below).
+ (void)dispatch_assume_zero(errno);
+ //kev_copy.flags |= EV_ERROR;
+ //kev_copy.data = error;
+ return errno;
+ }
- datc = malloc(sizeof(struct _dispatch_after_time_s));
- dispatch_assert(datc);
- datc->datc_ctxt = ctxt;
- datc->datc_func = func;
- datc->ds = ds;
+ // The following select workaround only applies to adding kevents
+ if ((kev->flags & (EV_DISABLE|EV_DELETE)) ||
+ !(kev->flags & (EV_ADD|EV_ENABLE))) {
+ return 0;
+ }
- dispatch_set_context(ds, datc);
- dispatch_source_set_event_handler_f(ds, _dispatch_after_timer_callback);
- dispatch_source_set_cancel_handler_f(ds, _dispatch_after_timer_cancel);
- dispatch_source_set_timer(ds, when, 0, 0);
- dispatch_resume(ds);
+ // Only executed on manager queue
+ switch (kev_copy.data) {
+ case 0:
+ return 0;
+ case EBADF:
+ break;
+ default:
+ // If an error occurred while processing the kevent changelist, and the
+ // kevent involved a read or write filter, we were most likely trying to
+ // register a /dev/* node; fall back to select()
+ switch (kev_copy.filter) {
+ case EVFILT_READ:
+ if (dispatch_assume(kev_copy.ident < FD_SETSIZE)) {
+ if (!_dispatch_rfd_ptrs) {
+ _dispatch_rfd_ptrs = calloc(FD_SETSIZE, sizeof(void*));
+ }
+ _dispatch_rfd_ptrs[kev_copy.ident] = kev_copy.udata;
+ FD_SET((int)kev_copy.ident, &_dispatch_rfds);
+ (void)dispatch_atomic_inc(&_dispatch_select_workaround);
+ _dispatch_debug("select workaround used to read fd %d: 0x%lx",
+ (int)kev_copy.ident, (long)kev_copy.data);
+ return 0;
+ }
+ break;
+ case EVFILT_WRITE:
+ if (dispatch_assume(kev_copy.ident < FD_SETSIZE)) {
+ if (!_dispatch_wfd_ptrs) {
+ _dispatch_wfd_ptrs = calloc(FD_SETSIZE, sizeof(void*));
+ }
+ _dispatch_wfd_ptrs[kev_copy.ident] = kev_copy.udata;
+ FD_SET((int)kev_copy.ident, &_dispatch_wfds);
+ (void)dispatch_atomic_inc(&_dispatch_select_workaround);
+ _dispatch_debug("select workaround used to write fd %d: 0x%lx",
+ (int)kev_copy.ident, (long)kev_copy.data);
+ return 0;
+ }
+ break;
+ default:
+ // kevent error; _dispatch_source_merge_kevent() will handle it
+ _dispatch_source_drain_kevent(&kev_copy);
+ break;
+ }
+ break;
+ }
+ return kev_copy.data;
+}
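+
+// Editor's note: a standalone sketch (not part of this patch) of the
+// EV_RECEIPT registration pattern relied on above: the changelist element is
+// echoed back with EV_ERROR set and any per-element error in kev.data, so a
+// registration failure (e.g. the /dev/* case) can be detected without a
+// pending event being returned. The kqueue, fd, and udata are hypothetical.
+// Wrapped in #if 0 so it is not built.
+#if 0
+#include <sys/types.h>
+#include <sys/event.h>
+#include <sys/time.h>
+#include <errno.h>
+
+static int
+example_register_read(int kq, int fd, void *udata)
+{
+	struct kevent kev;
+	EV_SET(&kev, fd, EVFILT_READ, EV_ADD|EV_ENABLE|EV_RECEIPT, 0, 0, udata);
+
+	// With EV_RECEIPT and nevents == 1, kevent() reports the result of this
+	// single changelist element back into 'kev' instead of draining events.
+	if (kevent(kq, &kev, 1, &kev, 1, NULL) == -1) {
+		return errno; // the kevent() call itself failed
+	}
+	if ((kev.flags & EV_ERROR) && kev.data != 0) {
+		return (int)kev.data; // per-element error; caller may fall back
+	}
+	return 0;
+}
+#endif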
+
+static bool
+_dispatch_mgr_wakeup(dispatch_queue_t dq)
+{
+ static const struct kevent kev = {
+ .ident = 1,
+ .filter = EVFILT_USER,
+ .fflags = NOTE_TRIGGER,
+ };
+
+ _dispatch_debug("waking up the _dispatch_mgr_q: %p", dq);
+
+ _dispatch_update_kq(&kev);
+
+ return false;
+}
+
+static void
+_dispatch_mgr_thread2(struct kevent *kev, size_t cnt)
+{
+ size_t i;
+
+ for (i = 0; i < cnt; i++) {
+ // EVFILT_USER isn't used by sources
+ if (kev[i].filter == EVFILT_USER) {
+ // If _dispatch_mgr_thread2() is ever changed to return to its
+ // caller, then this should become _dispatch_queue_drain()
+ _dispatch_queue_serial_drain_till_empty(&_dispatch_mgr_q);
+ } else {
+ _dispatch_source_drain_kevent(&kev[i]);
+ }
+ }
+}
+
+#if DISPATCH_USE_VM_PRESSURE && DISPATCH_USE_MALLOC_VM_PRESSURE_SOURCE
+// VM Pressure source for malloc <rdar://problem/7805121>
+static dispatch_source_t _dispatch_malloc_vm_pressure_source;
+
+static void
+_dispatch_malloc_vm_pressure_handler(void *context DISPATCH_UNUSED)
+{
+ malloc_zone_pressure_relief(0,0);
+}
+
+static void
+_dispatch_malloc_vm_pressure_setup(void)
+{
+ _dispatch_malloc_vm_pressure_source = dispatch_source_create(
+ DISPATCH_SOURCE_TYPE_VM, 0, DISPATCH_VM_PRESSURE,
+ _dispatch_get_root_queue(0, true));
+ dispatch_source_set_event_handler_f(_dispatch_malloc_vm_pressure_source,
+ _dispatch_malloc_vm_pressure_handler);
+ dispatch_resume(_dispatch_malloc_vm_pressure_source);
+}
+#else
+#define _dispatch_malloc_vm_pressure_setup()
+#endif
+
+DISPATCH_NOINLINE DISPATCH_NORETURN
+static void
+_dispatch_mgr_invoke(void)
+{
+ static const struct timespec timeout_immediately = { 0, 0 };
+ struct timespec timeout;
+ const struct timespec *timeoutp;
+ struct timeval sel_timeout, *sel_timeoutp;
+ fd_set tmp_rfds, tmp_wfds;
+ struct kevent kev[1];
+ int k_cnt, err, i, r;
+
+ _dispatch_thread_setspecific(dispatch_queue_key, &_dispatch_mgr_q);
+#if DISPATCH_COCOA_COMPAT
+ // Do not count the manager thread as a worker thread
+ (void)dispatch_atomic_dec(&_dispatch_worker_threads);
+#endif
+ _dispatch_malloc_vm_pressure_setup();
+
+ for (;;) {
+ _dispatch_run_timers();
+
+ timeoutp = _dispatch_get_next_timer_fire(&timeout);
+
+ if (_dispatch_select_workaround) {
+ FD_COPY(&_dispatch_rfds, &tmp_rfds);
+ FD_COPY(&_dispatch_wfds, &tmp_wfds);
+ if (timeoutp) {
+ sel_timeout.tv_sec = timeoutp->tv_sec;
+ sel_timeout.tv_usec = (typeof(sel_timeout.tv_usec))
+ (timeoutp->tv_nsec / 1000u);
+ sel_timeoutp = &sel_timeout;
+ } else {
+ sel_timeoutp = NULL;
+ }
+
+ r = select(FD_SETSIZE, &tmp_rfds, &tmp_wfds, NULL, sel_timeoutp);
+ if (r == -1) {
+ err = errno;
+ if (err != EBADF) {
+ if (err != EINTR) {
+ (void)dispatch_assume_zero(err);
+ }
+ continue;
+ }
+ for (i = 0; i < FD_SETSIZE; i++) {
+ if (i == _dispatch_kq) {
+ continue;
+ }
+ if (!FD_ISSET(i, &_dispatch_rfds) && !FD_ISSET(i,
+ &_dispatch_wfds)) {
+ continue;
+ }
+ r = dup(i);
+ if (r != -1) {
+ close(r);
+ } else {
+ if (FD_ISSET(i, &_dispatch_rfds)) {
+ FD_CLR(i, &_dispatch_rfds);
+ _dispatch_rfd_ptrs[i] = 0;
+ (void)dispatch_atomic_dec(
+ &_dispatch_select_workaround);
+ }
+ if (FD_ISSET(i, &_dispatch_wfds)) {
+ FD_CLR(i, &_dispatch_wfds);
+ _dispatch_wfd_ptrs[i] = 0;
+ (void)dispatch_atomic_dec(
+ &_dispatch_select_workaround);
+ }
+ }
+ }
+ continue;
+ }
+
+ if (r > 0) {
+ for (i = 0; i < FD_SETSIZE; i++) {
+ if (i == _dispatch_kq) {
+ continue;
+ }
+ if (FD_ISSET(i, &tmp_rfds)) {
+ FD_CLR(i, &_dispatch_rfds); // emulate EV_DISABLE
+ EV_SET(&kev[0], i, EVFILT_READ,
+ EV_ADD|EV_ENABLE|EV_DISPATCH, 0, 1,
+ _dispatch_rfd_ptrs[i]);
+ _dispatch_rfd_ptrs[i] = 0;
+ (void)dispatch_atomic_dec(&_dispatch_select_workaround);
+ _dispatch_mgr_thread2(kev, 1);
+ }
+ if (FD_ISSET(i, &tmp_wfds)) {
+ FD_CLR(i, &_dispatch_wfds); // emulate EV_DISABLE
+ EV_SET(&kev[0], i, EVFILT_WRITE,
+ EV_ADD|EV_ENABLE|EV_DISPATCH, 0, 1,
+ _dispatch_wfd_ptrs[i]);
+ _dispatch_wfd_ptrs[i] = 0;
+ (void)dispatch_atomic_dec(&_dispatch_select_workaround);
+ _dispatch_mgr_thread2(kev, 1);
+ }
+ }
+ }
+
+ timeoutp = &timeout_immediately;
+ }
+
+ k_cnt = kevent(_dispatch_kq, NULL, 0, kev, sizeof(kev) / sizeof(kev[0]),
+ timeoutp);
+ err = errno;
+
+ switch (k_cnt) {
+ case -1:
+ if (err == EBADF) {
+ DISPATCH_CLIENT_CRASH("Do not close random Unix descriptors");
+ }
+ if (err != EINTR) {
+ (void)dispatch_assume_zero(err);
+ }
+ continue;
+ default:
+ _dispatch_mgr_thread2(kev, (size_t)k_cnt);
+ // fall through
+ case 0:
+ _dispatch_force_cache_cleanup();
+ continue;
+ }
+ }
+}
+
+DISPATCH_NORETURN
+static dispatch_queue_t
+_dispatch_mgr_thread(dispatch_queue_t dq DISPATCH_UNUSED)
+{
+ // never returns, so burn bridges behind us & clear stack 2k ahead
+ _dispatch_clear_stack(2048);
+ _dispatch_mgr_invoke();
}
diff --git a/src/queue_internal.h b/src/queue_internal.h
index 858a556..479ae60 100644
--- a/src/queue_internal.h
+++ b/src/queue_internal.h
@@ -1,20 +1,20 @@
/*
- * Copyright (c) 2008-2009 Apple Inc. All rights reserved.
+ * Copyright (c) 2008-2011 Apple Inc. All rights reserved.
*
* @APPLE_APACHE_LICENSE_HEADER_START@
- *
+ *
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
- *
+ *
* http://www.apache.org/licenses/LICENSE-2.0
- *
+ *
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
- *
+ *
* @APPLE_APACHE_LICENSE_HEADER_END@
*/
@@ -35,81 +35,83 @@
// If dc_vtable is less than 127, then the object is a continuation.
// Otherwise, the object has a private layout and memory management rules. The
// first two words must align with normal objects.
-#define DISPATCH_CONTINUATION_HEADER(x) \
- const void * do_vtable; \
- struct x *volatile do_next; \
- dispatch_function_t dc_func; \
- void * dc_ctxt
+#define DISPATCH_CONTINUATION_HEADER(x) \
+ const void *do_vtable; \
+ struct x *volatile do_next; \
+ dispatch_function_t dc_func; \
+ void *dc_ctxt
-#define DISPATCH_OBJ_ASYNC_BIT 0x1
+#define DISPATCH_OBJ_ASYNC_BIT 0x1
#define DISPATCH_OBJ_BARRIER_BIT 0x2
-#define DISPATCH_OBJ_GROUP_BIT 0x4
+#define DISPATCH_OBJ_GROUP_BIT 0x4
+#define DISPATCH_OBJ_SYNC_SLOW_BIT 0x8
// vtables are pointers far away from the low page in memory
-#define DISPATCH_OBJ_IS_VTABLE(x) ((unsigned long)(x)->do_vtable > 127ul)
+#define DISPATCH_OBJ_IS_VTABLE(x) ((unsigned long)(x)->do_vtable > 127ul)
struct dispatch_continuation_s {
DISPATCH_CONTINUATION_HEADER(dispatch_continuation_s);
- dispatch_group_t dc_group;
- void * dc_data[3];
+ dispatch_group_t dc_group;
+ void *dc_data[3];
};
typedef struct dispatch_continuation_s *dispatch_continuation_t;
+struct dispatch_queue_attr_vtable_s {
+ DISPATCH_VTABLE_HEADER(dispatch_queue_attr_s);
+};
+
+struct dispatch_queue_attr_s {
+ DISPATCH_STRUCT_HEADER(dispatch_queue_attr_s, dispatch_queue_attr_vtable_s);
+};
struct dispatch_queue_vtable_s {
DISPATCH_VTABLE_HEADER(dispatch_queue_s);
};
-#define DISPATCH_QUEUE_MIN_LABEL_SIZE 64
+#define DISPATCH_QUEUE_MIN_LABEL_SIZE 64
-#ifndef DISPATCH_NO_LEGACY
-#define DISPATCH_QUEUE_HEADER \
- uint32_t dq_running; \
- uint32_t dq_width; \
- struct dispatch_object_s *dq_items_tail; \
- struct dispatch_object_s *volatile dq_items_head; \
- unsigned long dq_serialnum; \
- void *dq_finalizer_ctxt; \
- dispatch_queue_finalizer_function_t dq_finalizer_func
+#ifdef __LP64__
+#define DISPATCH_QUEUE_CACHELINE_PAD 32
#else
+#define DISPATCH_QUEUE_CACHELINE_PAD 8
+#endif
+
#define DISPATCH_QUEUE_HEADER \
- uint32_t dq_running; \
+ uint32_t volatile dq_running; \
uint32_t dq_width; \
- struct dispatch_object_s *dq_items_tail; \
+ struct dispatch_object_s *volatile dq_items_tail; \
struct dispatch_object_s *volatile dq_items_head; \
unsigned long dq_serialnum; \
- void *dq_finalizer_ctxt;
-#endif
+ dispatch_queue_t dq_specific_q;
struct dispatch_queue_s {
DISPATCH_STRUCT_HEADER(dispatch_queue_s, dispatch_queue_vtable_s);
DISPATCH_QUEUE_HEADER;
- char dq_label[DISPATCH_QUEUE_MIN_LABEL_SIZE]; // must be last
+ char dq_label[DISPATCH_QUEUE_MIN_LABEL_SIZE]; // must be last
+ char _dq_pad[DISPATCH_QUEUE_CACHELINE_PAD]; // for static queues only
};
extern struct dispatch_queue_s _dispatch_mgr_q;
-#define DISPATCH_ROOT_QUEUE_COUNT (DISPATCH_QUEUE_PRIORITY_COUNT * 2)
-extern struct dispatch_queue_s _dispatch_root_queues[];
-
-void _dispatch_queue_init(dispatch_queue_t dq);
-void _dispatch_queue_drain(dispatch_queue_t dq);
void _dispatch_queue_dispose(dispatch_queue_t dq);
-void _dispatch_queue_push_list_slow(dispatch_queue_t dq, struct dispatch_object_s *obj);
-void _dispatch_queue_serial_drain_till_empty(dispatch_queue_t dq);
-void _dispatch_force_cache_cleanup(void);
+void _dispatch_queue_invoke(dispatch_queue_t dq);
+void _dispatch_queue_push_list_slow(dispatch_queue_t dq,
+ struct dispatch_object_s *obj);
-__attribute__((always_inline))
+DISPATCH_ALWAYS_INLINE
static inline void
-_dispatch_queue_push_list(dispatch_queue_t dq, dispatch_object_t _head, dispatch_object_t _tail)
+_dispatch_queue_push_list(dispatch_queue_t dq, dispatch_object_t _head,
+ dispatch_object_t _tail)
{
struct dispatch_object_s *prev, *head = _head._do, *tail = _tail._do;
tail->do_next = NULL;
- prev = fastpath(dispatch_atomic_xchg(&dq->dq_items_tail, tail));
+ dispatch_atomic_store_barrier();
+ prev = fastpath(dispatch_atomic_xchg2o(dq, dq_items_tail, tail));
if (prev) {
- // if we crash here with a value less than 0x1000, then we are at a known bug in client code
- // for example, see _dispatch_queue_dispose or _dispatch_atfork_child
+ // if we crash here with a value less than 0x1000, then we are at a
+ // known bug in client code; for example, see _dispatch_queue_dispose
+ // or _dispatch_atfork_child
prev->do_next = head;
} else {
_dispatch_queue_push_list_slow(dq, head);
@@ -118,34 +120,91 @@
#define _dispatch_queue_push(x, y) _dispatch_queue_push_list((x), (y), (y))
-#define DISPATCH_QUEUE_PRIORITY_COUNT 3
-
#if DISPATCH_DEBUG
void dispatch_debug_queue(dispatch_queue_t dq, const char* str);
#else
-static inline void dispatch_debug_queue(dispatch_queue_t dq __attribute__((unused)), const char* str __attribute__((unused))) {}
+static inline void dispatch_debug_queue(dispatch_queue_t dq DISPATCH_UNUSED,
+ const char* str DISPATCH_UNUSED) {}
#endif
size_t dispatch_queue_debug(dispatch_queue_t dq, char* buf, size_t bufsiz);
-size_t dispatch_queue_debug_attr(dispatch_queue_t dq, char* buf, size_t bufsiz);
+size_t _dispatch_queue_debug_attr(dispatch_queue_t dq, char* buf,
+ size_t bufsiz);
+DISPATCH_ALWAYS_INLINE
static inline dispatch_queue_t
_dispatch_queue_get_current(void)
{
return _dispatch_thread_getspecific(dispatch_queue_key);
}
-__private_extern__ malloc_zone_t *_dispatch_ccache_zone;
-dispatch_continuation_t _dispatch_continuation_alloc_from_heap(void);
+#define DISPATCH_QUEUE_PRIORITY_COUNT 4
+#define DISPATCH_ROOT_QUEUE_COUNT (DISPATCH_QUEUE_PRIORITY_COUNT * 2)
-static inline dispatch_continuation_t
-_dispatch_continuation_alloc_cacheonly(void)
+// overcommit priority index values need bit 1 set
+enum {
+ DISPATCH_ROOT_QUEUE_IDX_LOW_PRIORITY = 0,
+ DISPATCH_ROOT_QUEUE_IDX_LOW_OVERCOMMIT_PRIORITY,
+ DISPATCH_ROOT_QUEUE_IDX_DEFAULT_PRIORITY,
+ DISPATCH_ROOT_QUEUE_IDX_DEFAULT_OVERCOMMIT_PRIORITY,
+ DISPATCH_ROOT_QUEUE_IDX_HIGH_PRIORITY,
+ DISPATCH_ROOT_QUEUE_IDX_HIGH_OVERCOMMIT_PRIORITY,
+ DISPATCH_ROOT_QUEUE_IDX_BACKGROUND_PRIORITY,
+ DISPATCH_ROOT_QUEUE_IDX_BACKGROUND_OVERCOMMIT_PRIORITY,
+};
+
+extern const struct dispatch_queue_attr_vtable_s dispatch_queue_attr_vtable;
+extern const struct dispatch_queue_vtable_s _dispatch_queue_vtable;
+extern unsigned long _dispatch_queue_serial_numbers;
+extern struct dispatch_queue_s _dispatch_root_queues[];
+
+DISPATCH_ALWAYS_INLINE DISPATCH_CONST
+static inline dispatch_queue_t
+_dispatch_get_root_queue(long priority, bool overcommit)
{
- dispatch_continuation_t dc = fastpath(_dispatch_thread_getspecific(dispatch_cache_key));
- if (dc) {
- _dispatch_thread_setspecific(dispatch_cache_key, dc->do_next);
+ if (overcommit) switch (priority) {
+ case DISPATCH_QUEUE_PRIORITY_LOW:
+ return &_dispatch_root_queues[
+ DISPATCH_ROOT_QUEUE_IDX_LOW_OVERCOMMIT_PRIORITY];
+ case DISPATCH_QUEUE_PRIORITY_DEFAULT:
+ return &_dispatch_root_queues[
+ DISPATCH_ROOT_QUEUE_IDX_DEFAULT_OVERCOMMIT_PRIORITY];
+ case DISPATCH_QUEUE_PRIORITY_HIGH:
+ return &_dispatch_root_queues[
+ DISPATCH_ROOT_QUEUE_IDX_HIGH_OVERCOMMIT_PRIORITY];
+ case DISPATCH_QUEUE_PRIORITY_BACKGROUND:
+ return &_dispatch_root_queues[
+ DISPATCH_ROOT_QUEUE_IDX_BACKGROUND_OVERCOMMIT_PRIORITY];
}
- return dc;
+ switch (priority) {
+ case DISPATCH_QUEUE_PRIORITY_LOW:
+ return &_dispatch_root_queues[DISPATCH_ROOT_QUEUE_IDX_LOW_PRIORITY];
+ case DISPATCH_QUEUE_PRIORITY_DEFAULT:
+ return &_dispatch_root_queues[DISPATCH_ROOT_QUEUE_IDX_DEFAULT_PRIORITY];
+ case DISPATCH_QUEUE_PRIORITY_HIGH:
+ return &_dispatch_root_queues[DISPATCH_ROOT_QUEUE_IDX_HIGH_PRIORITY];
+ case DISPATCH_QUEUE_PRIORITY_BACKGROUND:
+ return &_dispatch_root_queues[
+ DISPATCH_ROOT_QUEUE_IDX_BACKGROUND_PRIORITY];
+ default:
+ return NULL;
+ }
+}
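+
+// Editor's note: an illustrative sketch (not part of this patch). Client code
+// reaches these root queues through dispatch_get_global_queue(); the
+// overcommit variants are selected internally (for example, as the default
+// target queue set up in _dispatch_queue_init() below). Wrapped in #if 0 so
+// it is not built; the block bodies are hypothetical.
+#if 0
+static void
+example_use_global_queues(void)
+{
+	dispatch_queue_t q_default = dispatch_get_global_queue(
+			DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);
+	dispatch_queue_t q_bg = dispatch_get_global_queue(
+			DISPATCH_QUEUE_PRIORITY_BACKGROUND, 0);
+	dispatch_async(q_default, ^{ /* normal-priority work */ });
+	dispatch_async(q_bg, ^{ /* background work */ });
+}
+#endif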
+
+// Note to later developers: ensure that any initialization changes are
+// made for statically allocated queues (e.g. _dispatch_main_q).
+static inline void
+_dispatch_queue_init(dispatch_queue_t dq)
+{
+ dq->do_vtable = &_dispatch_queue_vtable;
+ dq->do_next = DISPATCH_OBJECT_LISTLESS;
+ dq->do_ref_cnt = 1;
+ dq->do_xref_cnt = 1;
+ // Default target queue is overcommit!
+ dq->do_targetq = _dispatch_get_root_queue(0, true);
+ dq->dq_running = 0;
+ dq->dq_width = 1;
+ dq->dq_serialnum = dispatch_atomic_inc(&_dispatch_queue_serial_numbers) - 1;
}
#endif
diff --git a/src/semaphore.c b/src/semaphore.c
index 8abc675..29585bd 100644
--- a/src/semaphore.c
+++ b/src/semaphore.c
@@ -1,20 +1,20 @@
/*
- * Copyright (c) 2008-2009 Apple Inc. All rights reserved.
+ * Copyright (c) 2008-2011 Apple Inc. All rights reserved.
*
* @APPLE_APACHE_LICENSE_HEADER_START@
- *
+ *
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
- *
+ *
* http://www.apache.org/licenses/LICENSE-2.0
- *
+ *
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
- *
+ *
* @APPLE_APACHE_LICENSE_HEADER_END@
*/
@@ -22,28 +22,34 @@
// semaphores are too fundamental to use the dispatch_assume*() macros
#if USE_MACH_SEM
-#define DISPATCH_SEMAPHORE_VERIFY_KR(x) do { \
- if (x) { \
- DISPATCH_CRASH("flawed group/semaphore logic"); \
- } \
+#define DISPATCH_SEMAPHORE_VERIFY_KR(x) do { \
+ if (slowpath(x)) { \
+ DISPATCH_CRASH("flawed group/semaphore logic"); \
+ } \
+ } while (0)
+#elif USE_POSIX_SEM
+#define DISPATCH_SEMAPHORE_VERIFY_RET(x) do { \
+ if (slowpath((x) == -1)) { \
+ DISPATCH_CRASH("flawed group/semaphore logic"); \
+ } \
} while (0)
#endif
-#if USE_POSIX_SEM
-#define DISPATCH_SEMAPHORE_VERIFY_RET(x) do { \
- if ((x) == -1) { \
- DISPATCH_CRASH("flawed group/semaphore logic"); \
- } \
- } while (0)
-#endif
+
+DISPATCH_WEAK // rdar://problem/8503746
+long _dispatch_semaphore_signal_slow(dispatch_semaphore_t dsema);
+
+static void _dispatch_semaphore_dispose(dispatch_semaphore_t dsema);
+static size_t _dispatch_semaphore_debug(dispatch_semaphore_t dsema, char *buf,
+ size_t bufsiz);
+static long _dispatch_group_wake(dispatch_semaphore_t dsema);
+
+#pragma mark -
+#pragma mark dispatch_semaphore_t
struct dispatch_semaphore_vtable_s {
DISPATCH_VTABLE_HEADER(dispatch_semaphore_s);
};
-static void _dispatch_semaphore_dispose(dispatch_semaphore_t dsema);
-static size_t _dispatch_semaphore_debug(dispatch_semaphore_t dsema, char *buf, size_t bufsiz);
-static long _dispatch_group_wake(dispatch_semaphore_t dsema);
-
const struct dispatch_semaphore_vtable_s _dispatch_semaphore_vtable = {
.do_type = DISPATCH_SEMAPHORE_TYPE,
.do_kind = "semaphore",
@@ -52,67 +58,34 @@
};
dispatch_semaphore_t
-_dispatch_get_thread_semaphore(void)
-{
- dispatch_semaphore_t dsema;
-
- dsema = (dispatch_semaphore_t)fastpath(_dispatch_thread_getspecific(dispatch_sema4_key));
- if (!dsema) {
- while (!(dsema = dispatch_semaphore_create(0))) {
- sleep(1);
- }
- }
- _dispatch_thread_setspecific(dispatch_sema4_key, NULL);
- return dsema;
-}
-
-void
-_dispatch_put_thread_semaphore(dispatch_semaphore_t dsema)
-{
- dispatch_semaphore_t old_sema = (dispatch_semaphore_t)_dispatch_thread_getspecific(dispatch_sema4_key);
- _dispatch_thread_setspecific(dispatch_sema4_key, dsema);
- if (old_sema) {
- dispatch_release(old_sema);
- }
-}
-
-dispatch_group_t
-dispatch_group_create(void)
-{
- return (dispatch_group_t)dispatch_semaphore_create(LONG_MAX);
-}
-
-dispatch_semaphore_t
dispatch_semaphore_create(long value)
{
dispatch_semaphore_t dsema;
-#if USE_POSIX_SEM
- int ret;
-#endif
-
+
// If the internal value is negative, then the absolute of the value is
// equal to the number of waiting threads. Therefore it is bogus to
// initialize the semaphore with a negative value.
if (value < 0) {
return NULL;
}
-
- dsema = (dispatch_semaphore_t)calloc(1, sizeof(struct dispatch_semaphore_s));
-
+
+ dsema = calloc(1, sizeof(struct dispatch_semaphore_s));
+
if (fastpath(dsema)) {
dsema->do_vtable = &_dispatch_semaphore_vtable;
- dsema->do_next = (dispatch_semaphore_t)DISPATCH_OBJECT_LISTLESS;
+ dsema->do_next = DISPATCH_OBJECT_LISTLESS;
dsema->do_ref_cnt = 1;
dsema->do_xref_cnt = 1;
- dsema->do_targetq = dispatch_get_global_queue(0, 0);
+ dsema->do_targetq = dispatch_get_global_queue(
+ DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);
dsema->dsema_value = value;
dsema->dsema_orig = value;
#if USE_POSIX_SEM
- ret = sem_init(&dsema->dsema_sem, 0, 0);
- (void)dispatch_assume_zero(ret);
+ int ret = sem_init(&dsema->dsema_sem, 0, 0);
+ DISPATCH_SEMAPHORE_VERIFY_RET(ret);
#endif
}
-
+
return dsema;
}
@@ -126,19 +99,20 @@
if (*s4) {
return;
}
-
+
// lazily allocate the semaphore port
-
+
// Someday:
// 1) Switch to a doubly-linked FIFO in user-space.
// 2) User-space timers for the timeout.
// 3) Use the per-thread semaphore port.
-
- while (dispatch_assume_zero(kr = semaphore_create(mach_task_self(), &tmp, SYNC_POLICY_FIFO, 0))) {
+
+ while ((kr = semaphore_create(mach_task_self(), &tmp,
+ SYNC_POLICY_FIFO, 0))) {
DISPATCH_VERIFY_MIG(kr);
sleep(1);
}
-
+
if (!dispatch_atomic_cmpxchg(s4, 0, tmp)) {
kr = semaphore_destroy(mach_task_self(), tmp);
DISPATCH_SEMAPHORE_VERIFY_KR(kr);
@@ -148,63 +122,115 @@
}
#endif
-#if USE_WIN32_SEM
static void
-_dispatch_semaphore_create_handle(HANDLE *s4)
+_dispatch_semaphore_dispose(dispatch_semaphore_t dsema)
{
- HANDLE tmp;
-
- if (*s4) {
- return;
+ if (dsema->dsema_value < dsema->dsema_orig) {
+ DISPATCH_CLIENT_CRASH(
+ "Semaphore/group object deallocated while in use");
}
- // lazily allocate the semaphore port
-
- while (dispatch_assume(tmp = CreateSemaphore(NULL, 0, LONG_MAX, NULL)) == NULL) {
- sleep(1);
+#if USE_MACH_SEM
+ kern_return_t kr;
+ if (dsema->dsema_port) {
+ kr = semaphore_destroy(mach_task_self(), dsema->dsema_port);
+ DISPATCH_SEMAPHORE_VERIFY_KR(kr);
}
-
- if (!dispatch_atomic_cmpxchg(s4, 0, tmp)) {
- CloseHandle(tmp);
+ if (dsema->dsema_waiter_port) {
+ kr = semaphore_destroy(mach_task_self(), dsema->dsema_waiter_port);
+ DISPATCH_SEMAPHORE_VERIFY_KR(kr);
}
+#elif USE_POSIX_SEM
+ int ret = sem_destroy(&dsema->dsema_sem);
+ DISPATCH_SEMAPHORE_VERIFY_RET(ret);
+#endif
+
+ _dispatch_dispose(dsema);
}
-#endif /* USE_WIN32_SEM */
+
+static size_t
+_dispatch_semaphore_debug(dispatch_semaphore_t dsema, char *buf, size_t bufsiz)
+{
+ size_t offset = 0;
+ offset += snprintf(&buf[offset], bufsiz - offset, "%s[%p] = { ",
+ dx_kind(dsema), dsema);
+ offset += _dispatch_object_debug_attr(dsema, &buf[offset], bufsiz - offset);
+#if USE_MACH_SEM
+ offset += snprintf(&buf[offset], bufsiz - offset, "port = 0x%u, ",
+ dsema->dsema_port);
+#endif
+ offset += snprintf(&buf[offset], bufsiz - offset,
+ "value = %ld, orig = %ld }", dsema->dsema_value, dsema->dsema_orig);
+ return offset;
+}
+
+DISPATCH_NOINLINE
+long
+_dispatch_semaphore_signal_slow(dispatch_semaphore_t dsema)
+{
+ // Before dsema_sent_ksignals is incremented we can rely on the reference
+ // held by the waiter. However, once this value is incremented the waiter
+ // may return between the atomic increment and the semaphore_signal(),
+ // therefore an explicit reference must be held in order to safely access
+ // dsema after the atomic increment.
+ _dispatch_retain(dsema);
+
+ (void)dispatch_atomic_inc2o(dsema, dsema_sent_ksignals);
+
+#if USE_MACH_SEM
+ _dispatch_semaphore_create_port(&dsema->dsema_port);
+ kern_return_t kr = semaphore_signal(dsema->dsema_port);
+ DISPATCH_SEMAPHORE_VERIFY_KR(kr);
+#elif USE_POSIX_SEM
+ int ret = sem_post(&dsema->dsema_sem);
+ DISPATCH_SEMAPHORE_VERIFY_RET(ret);
+#endif
+
+ _dispatch_release(dsema);
+ return 1;
+}
+
+long
+dispatch_semaphore_signal(dispatch_semaphore_t dsema)
+{
+ dispatch_atomic_release_barrier();
+ long value = dispatch_atomic_inc2o(dsema, dsema_value);
+ if (fastpath(value > 0)) {
+ return 0;
+ }
+ if (slowpath(value == LONG_MIN)) {
+ DISPATCH_CLIENT_CRASH("Unbalanced call to dispatch_group_leave() or "
+ "dispatch_semaphore_signal()");
+ }
+ return _dispatch_semaphore_signal_slow(dsema);
+}
DISPATCH_NOINLINE
static long
-_dispatch_semaphore_wait_slow(dispatch_semaphore_t dsema, dispatch_time_t timeout)
+_dispatch_semaphore_wait_slow(dispatch_semaphore_t dsema,
+ dispatch_time_t timeout)
{
-#if USE_MACH_SEM
- mach_timespec_t _timeout;
- kern_return_t kr;
-#endif
-#if USE_POSIX_SEM
- struct timespec _timeout;
-#endif
-#if USE_POSIX_SEM || USE_WIN32_SEM
- int ret;
-#endif
long orig;
-
+
again:
- // Mach semaphores appear to sometimes spuriously wake up. Therefore,
+ // Mach semaphores appear to sometimes spuriously wake up. Therefore,
// we keep a parallel count of the number of times a Mach semaphore is
// signaled (6880961).
while ((orig = dsema->dsema_sent_ksignals)) {
- if (dispatch_atomic_cmpxchg(&dsema->dsema_sent_ksignals, orig, orig - 1)) {
+ if (dispatch_atomic_cmpxchg2o(dsema, dsema_sent_ksignals, orig,
+ orig - 1)) {
return 0;
}
}
#if USE_MACH_SEM
+ mach_timespec_t _timeout;
+ kern_return_t kr;
+
_dispatch_semaphore_create_port(&dsema->dsema_port);
-#endif
-#if USE_WIN32_SEM
- _dispatch_semaphore_create_handle(&dsema->dsema_handle);
-#endif
// From xnu/osfmk/kern/sync_sema.c:
- // wait_semaphore->count = -1; /* we don't keep an actual count */
+ // wait_semaphore->count = -1; /* we don't keep an actual count */
//
// The code above does not match the documentation, and that fact is
// not surprising. The documented semantics are clumsy to use in any
@@ -213,11 +239,8 @@
switch (timeout) {
default:
-#if USE_MACH_SEM
do {
- uint64_t nsec;
- // timeout() already calculates relative time left
- nsec = _dispatch_timeout(timeout);
+ uint64_t nsec = _dispatch_timeout(timeout);
_timeout.tv_sec = (typeof(_timeout.tv_sec))(nsec / NSEC_PER_SEC);
_timeout.tv_nsec = (typeof(_timeout.tv_nsec))(nsec % NSEC_PER_SEC);
kr = slowpath(semaphore_timedwait(dsema->dsema_port, _timeout));
@@ -227,171 +250,135 @@
DISPATCH_SEMAPHORE_VERIFY_KR(kr);
break;
}
-#endif
-#if USE_POSIX_SEM
- do {
- _timeout = _dispatch_timeout_ts(timeout);
- ret = slowpath(sem_timedwait(&dsema->dsema_sem,
- &_timeout));
- } while (ret == -1 && errno == EINTR);
-
- if (!(ret == -1 && errno == ETIMEDOUT)) {
- DISPATCH_SEMAPHORE_VERIFY_RET(ret);
- break;
- }
-#endif
-#if USE_WIN32_SEM
- do {
- uint64_t nsec;
- DWORD msec;
- nsec = _dispatch_timeout(timeout);
- msec = (DWORD)(nsec / (uint64_t)1000000);
- ret = WaitForSingleObject(dsema->dsema_handle, msec);
- } while (ret != WAIT_OBJECT_0 && ret != WAIT_TIMEOUT);
- if (ret != WAIT_TIMEOUT) {
- break;
- }
-#endif /* USE_WIN32_SEM */
- // Fall through and try to undo what the fast path did to dsema->dsema_value
+ // Fall through and try to undo what the fast path did to
+ // dsema->dsema_value
case DISPATCH_TIME_NOW:
while ((orig = dsema->dsema_value) < 0) {
- if (dispatch_atomic_cmpxchg(&dsema->dsema_value, orig, orig + 1)) {
-#if USE_MACH_SEM
+ if (dispatch_atomic_cmpxchg2o(dsema, dsema_value, orig, orig + 1)) {
return KERN_OPERATION_TIMED_OUT;
-#endif
-#if USE_POSIX_SEM || USE_WIN32_SEM
- errno = ETIMEDOUT;
- return -1;
-#endif
}
}
// Another thread called semaphore_signal().
// Fall through and drain the wakeup.
case DISPATCH_TIME_FOREVER:
-#if USE_MACH_SEM
do {
kr = semaphore_wait(dsema->dsema_port);
} while (kr == KERN_ABORTED);
DISPATCH_SEMAPHORE_VERIFY_KR(kr);
-#endif
-#if USE_POSIX_SEM
+ break;
+ }
+#elif USE_POSIX_SEM
+ struct timespec _timeout;
+ int ret;
+
+ switch (timeout) {
+ default:
+ do {
+ uint64_t nsec = _dispatch_timeout(timeout);
+ _timeout.tv_sec = (typeof(_timeout.tv_sec))(nsec / NSEC_PER_SEC);
+ _timeout.tv_nsec = (typeof(_timeout.tv_nsec))(nsec % NSEC_PER_SEC);
+ ret = slowpath(sem_timedwait(&dsema->dsema_sem, &_timeout));
+ } while (ret == -1 && errno == EINTR);
+
+ if (ret == -1 && errno != ETIMEDOUT) {
+ DISPATCH_SEMAPHORE_VERIFY_RET(ret);
+ break;
+ }
+ // Fall through and try to undo what the fast path did to
+ // dsema->dsema_value
+ case DISPATCH_TIME_NOW:
+ while ((orig = dsema->dsema_value) < 0) {
+ if (dispatch_atomic_cmpxchg2o(dsema, dsema_value, orig, orig + 1)) {
+ errno = ETIMEDOUT;
+ return -1;
+ }
+ }
+ // Another thread called semaphore_signal().
+ // Fall through and drain the wakeup.
+ case DISPATCH_TIME_FOREVER:
do {
ret = sem_wait(&dsema->dsema_sem);
} while (ret != 0);
DISPATCH_SEMAPHORE_VERIFY_RET(ret);
-#endif
-#if USE_WIN32_SEM
- do {
- ret = WaitForSingleObject(dsema->dsema_handle, INFINITE);
- } while (ret != WAIT_OBJECT_0);
-#endif
break;
}
+#endif
goto again;
}
-DISPATCH_NOINLINE
+long
+dispatch_semaphore_wait(dispatch_semaphore_t dsema, dispatch_time_t timeout)
+{
+ long value = dispatch_atomic_dec2o(dsema, dsema_value);
+ dispatch_atomic_acquire_barrier();
+ if (fastpath(value >= 0)) {
+ return 0;
+ }
+ return _dispatch_semaphore_wait_slow(dsema, timeout);
+}
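+
+// Editor's note: an illustrative usage sketch (not part of this patch) for
+// the signal/wait fast paths above: the semaphore's counter bounds how many
+// blocks run concurrently. The limit of 4, the queue, and the block body are
+// hypothetical; the semaphore is deliberately not released here, since that
+// is only safe once every worker has signaled. Wrapped in #if 0 so it is not
+// built.
+#if 0
+static void
+example_semaphore_usage(void)
+{
+	dispatch_semaphore_t sema = dispatch_semaphore_create(4);
+	dispatch_queue_t q = dispatch_get_global_queue(
+			DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);
+	for (int i = 0; i < 100; i++) {
+		// Blocks until one of the 4 slots is free (the slow path above).
+		dispatch_semaphore_wait(sema, DISPATCH_TIME_FOREVER);
+		dispatch_async(q, ^{
+			/* do bounded work here */
+			dispatch_semaphore_signal(sema); // release the slot
+		});
+	}
+}
+#endif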
+
+#pragma mark -
+#pragma mark dispatch_group_t
+
+dispatch_group_t
+dispatch_group_create(void)
+{
+ return (dispatch_group_t)dispatch_semaphore_create(LONG_MAX);
+}
+
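+// Editor's note: an illustrative usage sketch (not part of this patch) of the
+// group API implemented below on top of the semaphore: enter/leave bracket
+// outstanding work and the notify fires once the count returns to its origin
+// (via _dispatch_group_wake()). The queue and block bodies are hypothetical.
+// Wrapped in #if 0 so it is not built.
+#if 0
+static void
+example_group_usage(void)
+{
+	dispatch_group_t group = dispatch_group_create();
+	dispatch_queue_t q = dispatch_get_global_queue(
+			DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);
+
+	// dispatch_group_async() is equivalent to enter + async + leave.
+	dispatch_group_async(group, q, ^{ /* work item A */ });
+	dispatch_group_async(group, q, ^{ /* work item B */ });
+
+	// Runs on the main queue after both items have completed.
+	dispatch_group_notify(group, dispatch_get_main_queue(), ^{
+		/* all outstanding work is done */
+	});
+	dispatch_release(group); // the pending notify holds its own reference
+}
+#endif
+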
void
dispatch_group_enter(dispatch_group_t dg)
{
dispatch_semaphore_t dsema = (dispatch_semaphore_t)dg;
-#if USE_APPLE_SEMAPHORE_OPTIMIZATIONS && defined(__OPTIMIZE__) && defined(__GNUC__) && (defined(__x86_64__) || defined(__i386__)) && !defined(__llvm__)
- // This assumes:
- // 1) Way too much about the optimizer of GCC.
- // 2) There will never be more than LONG_MAX threads.
- // Therefore: no overflow detection
- asm(
-#ifdef __LP64__
- "lock decq %0\n\t"
-#else
- "lock decl %0\n\t"
-#endif
- "js 1f\n\t"
- "ret\n\t"
- "1:"
- : "+m" (dsema->dsema_value)
- :
- : "cc"
- );
- _dispatch_semaphore_wait_slow(dsema, DISPATCH_TIME_FOREVER);
-#else
- dispatch_semaphore_wait(dsema, DISPATCH_TIME_FOREVER);
-#endif
-}
-DISPATCH_NOINLINE
-long
-dispatch_semaphore_wait(dispatch_semaphore_t dsema, dispatch_time_t timeout)
-{
-#if USE_APPLE_SEMAPHORE_OPTIMIZATIONS && defined(__OPTIMIZE__) && defined(__GNUC__) && (defined(__x86_64__) || defined(__i386__)) && !defined(__llvm__)
- // This assumes:
- // 1) Way too much about the optimizer of GCC.
- // 2) There will never be more than LONG_MAX threads.
- // Therefore: no overflow detection
- asm(
-#ifdef __LP64__
- "lock decq %0\n\t"
-#else
- "lock decl %0\n\t"
-#endif
- "js 1f\n\t"
- "xor %%eax, %%eax\n\t"
- "ret\n\t"
- "1:"
- : "+m" (dsema->dsema_value)
- :
- : "cc"
- );
-#else
- if (dispatch_atomic_dec(&dsema->dsema_value) >= 0) {
- return 0;
- }
-#endif
- return _dispatch_semaphore_wait_slow(dsema, timeout);
+ (void)dispatch_semaphore_wait(dsema, DISPATCH_TIME_FOREVER);
}
DISPATCH_NOINLINE
static long
-_dispatch_semaphore_signal_slow(dispatch_semaphore_t dsema)
+_dispatch_group_wake(dispatch_semaphore_t dsema)
{
-#if USE_POSIX_SEM || USE_WIN32_SEM
- int ret;
-#endif
-#if USE_MACH_SEM
- kern_return_t kr;
-
- _dispatch_semaphore_create_port(&dsema->dsema_port);
-#endif
-#if USE_WIN32_SEM
- _dispatch_semaphore_create_handle(&dsema->dsema_handle);
-#endif
+ struct dispatch_sema_notify_s *next, *head, *tail = NULL;
+ long rval;
- // Before dsema_sent_ksignals is incremented we can rely on the reference
- // held by the waiter. However, once this value is incremented the waiter
- // may return between the atomic increment and the semaphore_signal(),
- // therefore an explicit reference must be held in order to safely access
- // dsema after the atomic increment.
- _dispatch_retain(dsema);
-
- dispatch_atomic_inc(&dsema->dsema_sent_ksignals);
-
+ head = dispatch_atomic_xchg2o(dsema, dsema_notify_head, NULL);
+ if (head) {
+ // snapshot before anything is notified/woken <rdar://problem/8554546>
+ tail = dispatch_atomic_xchg2o(dsema, dsema_notify_tail, NULL);
+ }
+ rval = dispatch_atomic_xchg2o(dsema, dsema_group_waiters, 0);
+ if (rval) {
+ // wake group waiters
#if USE_MACH_SEM
- kr = semaphore_signal(dsema->dsema_port);
- DISPATCH_SEMAPHORE_VERIFY_KR(kr);
+ _dispatch_semaphore_create_port(&dsema->dsema_waiter_port);
+ do {
+ kern_return_t kr = semaphore_signal(dsema->dsema_waiter_port);
+ DISPATCH_SEMAPHORE_VERIFY_KR(kr);
+ } while (--rval);
+#elif USE_POSIX_SEM
+ do {
+ int ret = sem_post(&dsema->dsema_sem);
+ DISPATCH_SEMAPHORE_VERIFY_RET(ret);
+ } while (--rval);
#endif
-#if USE_POSIX_SEM
- ret = sem_post(&dsema->dsema_sem);
- DISPATCH_SEMAPHORE_VERIFY_RET(ret);
-#endif
-#if USE_WIN32_SEM
- // Signal the semaphore.
- ret = ReleaseSemaphore(dsema->dsema_handle, 1, NULL);
-#endif
-
- _dispatch_release(dsema);
-
- return 1;
+ }
+ if (head) {
+ // async group notify blocks
+ do {
+ dispatch_async_f(head->dsn_queue, head->dsn_ctxt, head->dsn_func);
+ _dispatch_release(head->dsn_queue);
+ next = fastpath(head->dsn_next);
+ if (!next && head != tail) {
+ while (!(next = fastpath(head->dsn_next))) {
+ _dispatch_hardware_pause();
+ }
+ }
+ free(head);
+ } while ((head = next));
+ _dispatch_release(dsema);
+ }
+ return 0;
}
void
@@ -400,213 +387,115 @@
dispatch_semaphore_t dsema = (dispatch_semaphore_t)dg;
dispatch_semaphore_signal(dsema);
-
if (dsema->dsema_value == dsema->dsema_orig) {
- _dispatch_group_wake(dsema);
+ (void)_dispatch_group_wake(dsema);
}
}
DISPATCH_NOINLINE
-long
-dispatch_semaphore_signal(dispatch_semaphore_t dsema)
-{
-#if USE_APPLE_SEMAPHORE_OPTIMIZATIONS && defined(__OPTIMIZE__) && defined(__GNUC__) && (defined(__x86_64__) || defined(__i386__)) && !defined(__llvm__)
- // overflow detection
- // this assumes way too much about the optimizer of GCC
- asm(
-#ifdef __LP64__
- "lock incq %0\n\t"
-#else
- "lock incl %0\n\t"
-#endif
- "jo 1f\n\t"
- "jle 2f\n\t"
- "xor %%eax, %%eax\n\t"
- "ret\n\t"
- "1:\n\t"
- "int $4\n\t"
- "2:"
- : "+m" (dsema->dsema_value)
- :
- : "cc"
- );
-#else
- if (dispatch_atomic_inc(&dsema->dsema_value) > 0) {
- return 0;
- }
-#endif
- return _dispatch_semaphore_signal_slow(dsema);
-}
-
-DISPATCH_NOINLINE
-long
-_dispatch_group_wake(dispatch_semaphore_t dsema)
-{
- struct dispatch_sema_notify_s *tmp;
- struct dispatch_sema_notify_s *head = (struct dispatch_sema_notify_s *)dispatch_atomic_xchg(&dsema->dsema_notify_head, NULL);
- long rval = (long)dispatch_atomic_xchg(&dsema->dsema_group_waiters, 0);
- bool do_rel = (head != NULL);
-#if USE_MACH_SEM
- long kr;
-#endif
-#if USE_POSIX_SEM || USE_WIN32_SEM
- int ret;
-#endif
-
- // wake any "group" waiter or notify blocks
-
- if (rval) {
-#if USE_MACH_SEM
- _dispatch_semaphore_create_port(&dsema->dsema_waiter_port);
- do {
- kr = semaphore_signal(dsema->dsema_waiter_port);
- DISPATCH_SEMAPHORE_VERIFY_KR(kr);
- } while (--rval);
-#endif
-#if USE_POSIX_SEM
- do {
- ret = sem_post(&dsema->dsema_sem);
- DISPATCH_SEMAPHORE_VERIFY_RET(ret);
- } while (--rval);
-#endif
-#if USE_WIN32_SEM
- // Signal the semaphore.
- ret = ReleaseSemaphore(dsema->dsema_waiter_handle, 1, NULL);
- dispatch_assume(ret);
-#endif
- }
- while (head) {
- dispatch_async_f(head->dsn_queue, head->dsn_ctxt, head->dsn_func);
- _dispatch_release(head->dsn_queue);
- do {
- tmp = head->dsn_next;
- } while (!tmp && !dispatch_atomic_cmpxchg(&dsema->dsema_notify_tail, head, NULL));
- free(head);
- head = tmp;
- }
- if (do_rel) {
- _dispatch_release(dsema);
- }
- return 0;
-}
-
-DISPATCH_NOINLINE
static long
_dispatch_group_wait_slow(dispatch_semaphore_t dsema, dispatch_time_t timeout)
{
-#if USE_MACH_SEM
- mach_timespec_t _timeout;
- kern_return_t kr;
-#endif
-#if USE_POSIX_SEM
- struct timespec _timeout;
-#endif
-#if USE_POSIX_SEM || USE_WIN32_SEM
- int ret;
-#endif
long orig;
-
+
again:
- // check before we cause another signal to be sent by incrementing dsema->dsema_group_waiters
+ // check before we cause another signal to be sent by incrementing
+ // dsema->dsema_group_waiters
if (dsema->dsema_value == dsema->dsema_orig) {
return _dispatch_group_wake(dsema);
}
- // Mach semaphores appear to sometimes spuriously wake up. Therefore,
+ // Mach semaphores appear to sometimes spuriously wake up. Therefore,
// we keep a parallel count of the number of times a Mach semaphore is
// signaled (6880961).
- (void)dispatch_atomic_inc(&dsema->dsema_group_waiters);
+ (void)dispatch_atomic_inc2o(dsema, dsema_group_waiters);
// check the values again in case we need to wake any threads
if (dsema->dsema_value == dsema->dsema_orig) {
return _dispatch_group_wake(dsema);
}
#if USE_MACH_SEM
+ mach_timespec_t _timeout;
+ kern_return_t kr;
+
_dispatch_semaphore_create_port(&dsema->dsema_waiter_port);
-#endif
-
+
// From xnu/osfmk/kern/sync_sema.c:
- // wait_semaphore->count = -1; /* we don't keep an actual count */
+ // wait_semaphore->count = -1; /* we don't keep an actual count */
//
// The code above does not match the documentation, and that fact is
// not surprising. The documented semantics are clumsy to use in any
// practical way. The above hack effectively tricks the rest of the
// Mach semaphore logic to behave like the libdispatch algorithm.
-
+
switch (timeout) {
default:
-#if USE_MACH_SEM
do {
- uint64_t nsec;
- nsec = _dispatch_timeout(timeout);
+ uint64_t nsec = _dispatch_timeout(timeout);
_timeout.tv_sec = (typeof(_timeout.tv_sec))(nsec / NSEC_PER_SEC);
_timeout.tv_nsec = (typeof(_timeout.tv_nsec))(nsec % NSEC_PER_SEC);
- kr = slowpath(semaphore_timedwait(dsema->dsema_waiter_port, _timeout));
+ kr = slowpath(semaphore_timedwait(dsema->dsema_waiter_port,
+ _timeout));
} while (kr == KERN_ABORTED);
+
if (kr != KERN_OPERATION_TIMED_OUT) {
DISPATCH_SEMAPHORE_VERIFY_KR(kr);
break;
}
-#endif
-#if USE_POSIX_SEM
+ // Fall through and try to undo the earlier change to
+ // dsema->dsema_group_waiters
+ case DISPATCH_TIME_NOW:
+ while ((orig = dsema->dsema_group_waiters)) {
+ if (dispatch_atomic_cmpxchg2o(dsema, dsema_group_waiters, orig,
+ orig - 1)) {
+ return KERN_OPERATION_TIMED_OUT;
+ }
+ }
+ // Another thread called semaphore_signal().
+ // Fall through and drain the wakeup.
+ case DISPATCH_TIME_FOREVER:
do {
- _timeout = _dispatch_timeout_ts(timeout);
- ret = slowpath(sem_timedwait(&dsema->dsema_sem,
- &_timeout));
+ kr = semaphore_wait(dsema->dsema_waiter_port);
+ } while (kr == KERN_ABORTED);
+ DISPATCH_SEMAPHORE_VERIFY_KR(kr);
+ break;
+ }
+#elif USE_POSIX_SEM
+ struct timespec _timeout;
+ int ret;
+
+ switch (timeout) {
+ default:
+ do {
+ uint64_t nsec = _dispatch_timeout(timeout);
+ _timeout.tv_sec = (typeof(_timeout.tv_sec))(nsec / NSEC_PER_SEC);
+ _timeout.tv_nsec = (typeof(_timeout.tv_nsec))(nsec % NSEC_PER_SEC);
+ ret = slowpath(sem_timedwait(&dsema->dsema_sem, &_timeout));
} while (ret == -1 && errno == EINTR);
if (!(ret == -1 && errno == ETIMEDOUT)) {
DISPATCH_SEMAPHORE_VERIFY_RET(ret);
break;
}
-#endif
-#if USE_WIN32_SEM
- do {
- uint64_t nsec;
- DWORD msec;
- nsec = _dispatch_timeout(timeout);
- msec = (DWORD)(nsec / (uint64_t)1000000);
- ret = WaitForSingleObject(dsema->dsema_waiter_handle, msec);
- } while (ret != WAIT_OBJECT_0 && ret != WAIT_TIMEOUT);
- if (ret == WAIT_TIMEOUT) {
- break;
- }
-#endif /* USE_WIN32_SEM */
- // Fall through and try to undo the earlier change to dsema->dsema_group_waiters
+ // Fall through and try to undo the earlier change to
+ // dsema->dsema_group_waiters
case DISPATCH_TIME_NOW:
while ((orig = dsema->dsema_group_waiters)) {
- if (dispatch_atomic_cmpxchg(&dsema->dsema_group_waiters, orig, orig - 1)) {
-#if USE_MACH_SEM
- return KERN_OPERATION_TIMED_OUT;
-#endif
-#if USE_POSIX_SEM || USE_WIN32_SEM
+ if (dispatch_atomic_cmpxchg2o(dsema, dsema_group_waiters, orig,
+ orig - 1)) {
errno = ETIMEDOUT;
return -1;
-#endif
}
}
// Another thread called semaphore_signal().
// Fall through and drain the wakeup.
case DISPATCH_TIME_FOREVER:
-#if USE_MACH_SEM
- do {
- kr = semaphore_wait(dsema->dsema_waiter_port);
- } while (kr == KERN_ABORTED);
- DISPATCH_SEMAPHORE_VERIFY_KR(kr);
-#endif
-#if USE_POSIX_SEM
do {
ret = sem_wait(&dsema->dsema_sem);
} while (ret == -1 && errno == EINTR);
DISPATCH_SEMAPHORE_VERIFY_RET(ret);
-#endif
-#if USE_WIN32_SEM
- do {
- ret = WaitForSingleObject(dsema->dsema_waiter_handle, INFINITE);
- } while (ret != WAIT_OBJECT_0);
-#endif
-
break;
}
+#endif
goto again;
}
@@ -622,8 +511,7 @@
if (timeout == 0) {
#if USE_MACH_SEM
return KERN_OPERATION_TIMED_OUT;
-#endif
-#if USE_POSIX_SEM || USE_WIN32_SEM
+#elif USE_POSIX_SEM
errno = ETIMEDOUT;
return (-1);
#endif
@@ -631,32 +519,25 @@
return _dispatch_group_wait_slow(dsema, timeout);
}
-#ifdef __BLOCKS__
+DISPATCH_NOINLINE
void
-dispatch_group_notify(dispatch_group_t dg, dispatch_queue_t dq, dispatch_block_t db)
-{
- dispatch_group_notify_f(dg, dq, _dispatch_Block_copy(db), _dispatch_call_block_and_release);
-}
-#endif
-
-void
-dispatch_group_notify_f(dispatch_group_t dg, dispatch_queue_t dq, void *ctxt, void (*func)(void *))
+dispatch_group_notify_f(dispatch_group_t dg, dispatch_queue_t dq, void *ctxt,
+ void (*func)(void *))
{
dispatch_semaphore_t dsema = (dispatch_semaphore_t)dg;
struct dispatch_sema_notify_s *dsn, *prev;
// FIXME -- this should be updated to use the continuation cache
- while (!(dsn = (struct dispatch_sema_notify_s *)malloc(sizeof(*dsn)))) {
+ while (!(dsn = calloc(1, sizeof(*dsn)))) {
sleep(1);
}
- dsn->dsn_next = NULL;
dsn->dsn_queue = dq;
dsn->dsn_ctxt = ctxt;
dsn->dsn_func = func;
_dispatch_retain(dq);
-
- prev = (struct dispatch_sema_notify_s *)dispatch_atomic_xchg(&dsema->dsema_notify_tail, dsn);
+ dispatch_atomic_store_barrier();
+ prev = dispatch_atomic_xchg2o(dsema, dsema_notify_tail, dsn);
if (fastpath(prev)) {
prev->dsn_next = dsn;
} else {
@@ -668,87 +549,108 @@
}
}
-void
-_dispatch_semaphore_dispose(dispatch_semaphore_t dsema)
-{
-#if USE_MACH_SEM
- kern_return_t kr;
-#endif
-#if USE_POSIX_SEM
- int ret;
-#endif
-
- if (dsema->dsema_value < dsema->dsema_orig) {
- DISPATCH_CLIENT_CRASH("Semaphore/group object deallocated while in use");
- }
-
-#if USE_MACH_SEM
- if (dsema->dsema_port) {
- kr = semaphore_destroy(mach_task_self(), dsema->dsema_port);
- DISPATCH_SEMAPHORE_VERIFY_KR(kr);
- }
- if (dsema->dsema_waiter_port) {
- kr = semaphore_destroy(mach_task_self(), dsema->dsema_waiter_port);
- DISPATCH_SEMAPHORE_VERIFY_KR(kr);
- }
-#endif
-#if USE_POSIX_SEM
- ret = sem_destroy(&dsema->dsema_sem);
- DISPATCH_SEMAPHORE_VERIFY_RET(ret);
-#endif
-#if USE_WIN32_SEM
- if (dsema->dsema_handle) {
- CloseHandle(dsema->dsema_handle);
- }
- if (dsema->dsema_waiter_handle) {
- CloseHandle(dsema->dsema_waiter_handle);
- }
-#endif
-
- _dispatch_dispose(dsema);
-}
-
-size_t
-_dispatch_semaphore_debug(dispatch_semaphore_t dsema, char *buf, size_t bufsiz)
-{
- size_t offset = 0;
- offset += snprintf(&buf[offset], bufsiz - offset, "%s[%p] = { ", dx_kind(dsema), dsema);
- offset += dispatch_object_debug_attr(dsema, &buf[offset], bufsiz - offset);
-#if USE_MACH_SEM
- offset += snprintf(&buf[offset], bufsiz - offset, "port = 0x%u, ",
- dsema->dsema_port);
-#endif
- offset += snprintf(&buf[offset], bufsiz - offset,
- "value = %ld, orig = %ld }", dsema->dsema_value, dsema->dsema_orig);
- return offset;
-}
-
#ifdef __BLOCKS__
void
-dispatch_group_async(dispatch_group_t dg, dispatch_queue_t dq, dispatch_block_t db)
+dispatch_group_notify(dispatch_group_t dg, dispatch_queue_t dq,
+ dispatch_block_t db)
{
- dispatch_group_async_f(dg, dq, _dispatch_Block_copy(db), _dispatch_call_block_and_release);
+ dispatch_group_notify_f(dg, dq, _dispatch_Block_copy(db),
+ _dispatch_call_block_and_release);
}
#endif
+#pragma mark -
+#pragma mark _dispatch_thread_semaphore_t
+
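+// Cached per-thread semaphores used by the library's internal blocking slow
+// paths. A rough sketch of the intended lifecycle, inferred from the
+// functions below (not public API):
+//
+//	_dispatch_thread_semaphore_t s = _dispatch_get_thread_semaphore();
+//	// ... publish `s` somewhere the signalling thread can find it ...
+//	_dispatch_thread_semaphore_wait(s);	// blocks until the peer signals
+//	_dispatch_put_thread_semaphore(s);	// return it to the TSD cache
+//
+// The signalling side simply calls _dispatch_thread_semaphore_signal(s).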
+DISPATCH_NOINLINE
+static _dispatch_thread_semaphore_t
+_dispatch_thread_semaphore_create(void)
+{
+#if USE_MACH_SEM
+ semaphore_t s4;
+ kern_return_t kr;
+ while (slowpath(kr = semaphore_create(mach_task_self(), &s4,
+ SYNC_POLICY_FIFO, 0))) {
+ DISPATCH_VERIFY_MIG(kr);
+ sleep(1);
+ }
+ return s4;
+#elif USE_POSIX_SEM
+ sem_t s4;
+ int ret = sem_init(&s4, 0, 0);
+ DISPATCH_SEMAPHORE_VERIFY_RET(ret);
+ return s4;
+#endif
+}
+
DISPATCH_NOINLINE
void
-dispatch_group_async_f(dispatch_group_t dg, dispatch_queue_t dq, void *ctxt, void (*func)(void *))
+_dispatch_thread_semaphore_dispose(_dispatch_thread_semaphore_t sema)
{
- dispatch_continuation_t dc;
+#if USE_MACH_SEM
+ semaphore_t s4 = (semaphore_t)sema;
+ kern_return_t kr = semaphore_destroy(mach_task_self(), s4);
+ DISPATCH_SEMAPHORE_VERIFY_KR(kr);
+#elif USE_POSIX_SEM
+ sem_t s4 = (sem_t)sema;
+ int ret = sem_destroy(&s4);
+ DISPATCH_SEMAPHORE_VERIFY_RET(ret);
+#endif
+}
- _dispatch_retain(dg);
- dispatch_group_enter(dg);
+void
+_dispatch_thread_semaphore_signal(_dispatch_thread_semaphore_t sema)
+{
+#if USE_MACH_SEM
+ semaphore_t s4 = (semaphore_t)sema;
+ kern_return_t kr = semaphore_signal(s4);
+ DISPATCH_SEMAPHORE_VERIFY_KR(kr);
+#elif USE_POSIX_SEM
+ sem_t s4 = (sem_t)sema;
+ int ret = sem_post(&s4);
+ DISPATCH_SEMAPHORE_VERIFY_RET(ret);
+#endif
+}
- dc = _dispatch_continuation_alloc_cacheonly();
- if (dc == NULL) {
- dc = _dispatch_continuation_alloc_from_heap();
+void
+_dispatch_thread_semaphore_wait(_dispatch_thread_semaphore_t sema)
+{
+#if USE_MACH_SEM
+ semaphore_t s4 = (semaphore_t)sema;
+ kern_return_t kr;
+ do {
+ kr = semaphore_wait(s4);
+ } while (slowpath(kr == KERN_ABORTED));
+ DISPATCH_SEMAPHORE_VERIFY_KR(kr);
+#elif USE_POSIX_SEM
+ sem_t s4 = (sem_t)sema;
+ int ret;
+ do {
+ ret = sem_wait(&s4);
+ } while (slowpath(ret != 0));
+ DISPATCH_SEMAPHORE_VERIFY_RET(ret);
+#endif
+}
+
+_dispatch_thread_semaphore_t
+_dispatch_get_thread_semaphore(void)
+{
+ _dispatch_thread_semaphore_t sema = (_dispatch_thread_semaphore_t)
+ _dispatch_thread_getspecific(dispatch_sema4_key);
+ if (slowpath(!sema)) {
+ return _dispatch_thread_semaphore_create();
}
+ _dispatch_thread_setspecific(dispatch_sema4_key, NULL);
+ return sema;
+}
- dc->do_vtable = (void *)(DISPATCH_OBJ_ASYNC_BIT|DISPATCH_OBJ_GROUP_BIT);
- dc->dc_func = func;
- dc->dc_ctxt = ctxt;
- dc->dc_group = dg;
-
- _dispatch_queue_push(dq, dc);
+void
+_dispatch_put_thread_semaphore(_dispatch_thread_semaphore_t sema)
+{
+ _dispatch_thread_semaphore_t old_sema = (_dispatch_thread_semaphore_t)
+ _dispatch_thread_getspecific(dispatch_sema4_key);
+ _dispatch_thread_setspecific(dispatch_sema4_key, (void*)sema);
+ if (slowpath(old_sema)) {
+ return _dispatch_thread_semaphore_dispose(old_sema);
+ }
}
diff --git a/src/semaphore_internal.h b/src/semaphore_internal.h
index f56198f..e5b319e 100644
--- a/src/semaphore_internal.h
+++ b/src/semaphore_internal.h
@@ -1,20 +1,20 @@
/*
- * Copyright (c) 2008-2009 Apple Inc. All rights reserved.
+ * Copyright (c) 2008-2011 Apple Inc. All rights reserved.
*
* @APPLE_APACHE_LICENSE_HEADER_START@
- *
+ *
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
- *
+ *
* http://www.apache.org/licenses/LICENSE-2.0
- *
+ *
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
- *
+ *
* @APPLE_APACHE_LICENSE_HEADER_END@
*/
@@ -28,7 +28,7 @@
#define __DISPATCH_SEMAPHORE_INTERNAL__
struct dispatch_sema_notify_s {
- struct dispatch_sema_notify_s *dsn_next;
+ struct dispatch_sema_notify_s *volatile dsn_next;
dispatch_queue_t dsn_queue;
void *dsn_ctxt;
void (*dsn_func)(void *);
@@ -39,16 +39,13 @@
long dsema_value;
long dsema_orig;
size_t dsema_sent_ksignals;
-#if (USE_MACH_SEM + USE_POSIX_SEM + USE_WIN32_SEM) > 1
+#if USE_MACH_SEM && USE_POSIX_SEM
#error "Too many supported semaphore types"
#elif USE_MACH_SEM
semaphore_t dsema_port;
semaphore_t dsema_waiter_port;
#elif USE_POSIX_SEM
sem_t dsema_sem;
-#elif USE_WIN32_SEM
- HANDLE dsema_handle;
- HANDLE dsema_waiter_handle;
#else
#error "No supported semaphore type"
#endif
@@ -59,4 +56,11 @@
extern const struct dispatch_semaphore_vtable_s _dispatch_semaphore_vtable;
+typedef uintptr_t _dispatch_thread_semaphore_t;
+_dispatch_thread_semaphore_t _dispatch_get_thread_semaphore(void);
+void _dispatch_put_thread_semaphore(_dispatch_thread_semaphore_t);
+void _dispatch_thread_semaphore_wait(_dispatch_thread_semaphore_t);
+void _dispatch_thread_semaphore_signal(_dispatch_thread_semaphore_t);
+void _dispatch_thread_semaphore_dispose(_dispatch_thread_semaphore_t);
+
#endif
diff --git a/src/shims.h b/src/shims.h
index 518d0d2..73322be 100644
--- a/src/shims.h
+++ b/src/shims.h
@@ -1,20 +1,20 @@
/*
- * Copyright (c) 2008-2009 Apple Inc. All rights reserved.
+ * Copyright (c) 2008-2011 Apple Inc. All rights reserved.
*
* @APPLE_APACHE_LICENSE_HEADER_START@
- *
+ *
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
- *
+ *
* http://www.apache.org/licenses/LICENSE-2.0
- *
+ *
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
- *
+ *
* @APPLE_APACHE_LICENSE_HEADER_END@
*/
@@ -28,9 +28,6 @@
#define __DISPATCH_OS_SHIMS__
#include <pthread.h>
-#if HAVE_PTHREAD_MACHDEP_H
-#include <pthread_machdep.h>
-#endif
#if HAVE_PTHREAD_WORKQUEUES
#include <pthread_workqueue.h>
#endif
@@ -38,33 +35,40 @@
#include <pthread_np.h>
#endif
-#if USE_APPLE_CRASHREPORTER_INFO
-__private_extern__ const char *__crashreporter_info__;
-#endif
-
#if !HAVE_DECL_FD_COPY
-#define FD_COPY(f, t) (void)(*(t) = *(f))
+#define FD_COPY(f, t) (void)(*(t) = *(f))
#endif
-#if TARGET_OS_WIN32
-#define bzero(ptr,len) memset((ptr), 0, (len))
-#define snprintf _snprintf
-
-inline size_t strlcpy(char *dst, const char *src, size_t size) {
- int res = strlen(dst) + strlen(src) + 1;
- if (size > 0) {
- size_t n = size - 1;
- strncpy(dst, src, n);
- dst[n] = 0;
- }
- return res;
-}
+#if !HAVE_NORETURN_BUILTIN_TRAP
+/*
+ * XXXRW: Work-around for possible clang bug in which __builtin_trap() is not
+ * marked noreturn, leading to a build error as dispatch_main() *is* marked
+ * noreturn. Mask by marking __builtin_trap() as noreturn locally.
+ */
+DISPATCH_NORETURN
+void __builtin_trap(void);
#endif
+#include "shims/atomic.h"
+#include "shims/tsd.h"
+#include "shims/hw_config.h"
+#include "shims/perfmon.h"
+
#include "shims/getprogname.h"
#include "shims/malloc_zone.h"
-#include "shims/tsd.h"
-#include "shims/perfmon.h"
#include "shims/time.h"
+#ifdef __APPLE__
+// Clear the stack before calling long-running thread-handler functions that
+// never return (and don't take arguments), to facilitate leak detection and
+// provide cleaner backtraces. <rdar://problem/9050566>
+#define _dispatch_clear_stack(s) do { \
+ void *a[(s)/sizeof(void*) ? (s)/sizeof(void*) : 1]; \
+ a[0] = pthread_get_stackaddr_np(pthread_self()); \
+ bzero((void*)&a[1], a[0] - (void*)&a[1]); \
+ } while (0)
+#else
+#define _dispatch_clear_stack(s)
+#endif
+
#endif
diff --git a/src/shims/atomic.h b/src/shims/atomic.h
index ab66c04..fbc1171 100644
--- a/src/shims/atomic.h
+++ b/src/shims/atomic.h
@@ -1,20 +1,20 @@
/*
- * Copyright (c) 2008-2009 Apple Inc. All rights reserved.
+ * Copyright (c) 2008-2011 Apple Inc. All rights reserved.
*
* @APPLE_APACHE_LICENSE_HEADER_START@
- *
+ *
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
- *
+ *
* http://www.apache.org/licenses/LICENSE-2.0
- *
+ *
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
- *
+ *
* @APPLE_APACHE_LICENSE_HEADER_END@
*/
@@ -24,49 +24,134 @@
* relying on these interfaces WILL break.
*/
-#ifndef __DISPATCH_HW_SHIMS__
-#define __DISPATCH_HW_SHIMS__
+#ifndef __DISPATCH_SHIMS_ATOMIC__
+#define __DISPATCH_SHIMS_ATOMIC__
-/* x86 has a 64 byte cacheline */
-#define DISPATCH_CACHELINE_SIZE 64
-#define ROUND_UP_TO_CACHELINE_SIZE(x) (((x) + (DISPATCH_CACHELINE_SIZE - 1)) & ~(DISPATCH_CACHELINE_SIZE - 1))
-#define ROUND_UP_TO_VECTOR_SIZE(x) (((x) + 15) & ~15)
+/* x86 & cortex-a8 have a 64 byte cacheline */
+#define DISPATCH_CACHELINE_SIZE 64
+#define ROUND_UP_TO_CACHELINE_SIZE(x) \
+ (((x) + (DISPATCH_CACHELINE_SIZE - 1)) & ~(DISPATCH_CACHELINE_SIZE - 1))
+#define ROUND_UP_TO_VECTOR_SIZE(x) \
+ (((x) + 15) & ~15)
+#define DISPATCH_CACHELINE_ALIGN \
+ __attribute__((__aligned__(DISPATCH_CACHELINE_SIZE)))
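+// For example, ROUND_UP_TO_CACHELINE_SIZE(100) == 128 and
+// ROUND_UP_TO_CACHELINE_SIZE(128) == 128: adding (DISPATCH_CACHELINE_SIZE - 1)
+// and masking off the low bits rounds up to the next 64-byte multiple without
+// branching.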
#if __GNUC__ > 4 || (__GNUC__ == 4 && __GNUC_MINOR__ >= 2)
-// GCC generates suboptimal register pressure
-// LLVM does better, but doesn't support tail calls
-// 6248590 __sync_*() intrinsics force a gratuitous "lea" instruction, with resulting register pressure
-#if 0 && defined(__i386__) || defined(__x86_64__)
-#define dispatch_atomic_xchg(p, n) ({ typeof(*(p)) _r; asm("xchg %0, %1" : "=r" (_r) : "m" (*(p)), "0" (n)); _r; })
+
+#define _dispatch_atomic_barrier() __sync_synchronize()
+// see comment in dispatch_once.c
+#define dispatch_atomic_maximally_synchronizing_barrier() \
+ _dispatch_atomic_barrier()
+// assume atomic builtins provide barriers
+#define dispatch_atomic_barrier()
+#define dispatch_atomic_acquire_barrier()
+#define dispatch_atomic_release_barrier()
+#define dispatch_atomic_store_barrier()
+
+#define _dispatch_hardware_pause() asm("")
+#define _dispatch_debugger() asm("trap")
+
+#define dispatch_atomic_cmpxchg(p, e, n) \
+ __sync_bool_compare_and_swap((p), (e), (n))
+#if __has_builtin(__sync_swap)
+#define dispatch_atomic_xchg(p, n) \
+ ((typeof(*(p)))__sync_swap((p), (n)))
#else
-#define dispatch_atomic_xchg(p, n) ((typeof(*(p)))__sync_lock_test_and_set((p), (n)))
+#define dispatch_atomic_xchg(p, n) \
+ ((typeof(*(p)))__sync_lock_test_and_set((p), (n)))
#endif
-#define dispatch_atomic_cmpxchg(p, o, n) __sync_bool_compare_and_swap((p), (o), (n))
-#define dispatch_atomic_inc(p) __sync_add_and_fetch((p), 1)
-#define dispatch_atomic_dec(p) __sync_sub_and_fetch((p), 1)
#define dispatch_atomic_add(p, v) __sync_add_and_fetch((p), (v))
#define dispatch_atomic_sub(p, v) __sync_sub_and_fetch((p), (v))
#define dispatch_atomic_or(p, v) __sync_fetch_and_or((p), (v))
#define dispatch_atomic_and(p, v) __sync_fetch_and_and((p), (v))
-#if defined(__i386__) || defined(__x86_64__)
-/* GCC emits nothing for __sync_synchronize() on i386/x86_64. */
-#define dispatch_atomic_barrier() __asm__ __volatile__("mfence")
-#else
-#define dispatch_atomic_barrier() __sync_synchronize()
-#endif
+
+#define dispatch_atomic_inc(p) dispatch_atomic_add((p), 1)
+#define dispatch_atomic_dec(p) dispatch_atomic_sub((p), 1)
+// really just a low level abort()
+#define _dispatch_hardware_crash() __builtin_trap()
+
+#define dispatch_atomic_cmpxchg2o(p, f, e, n) \
+ dispatch_atomic_cmpxchg(&(p)->f, (e), (n))
+#define dispatch_atomic_xchg2o(p, f, n) \
+ dispatch_atomic_xchg(&(p)->f, (n))
+#define dispatch_atomic_add2o(p, f, v) \
+ dispatch_atomic_add(&(p)->f, (v))
+#define dispatch_atomic_sub2o(p, f, v) \
+ dispatch_atomic_sub(&(p)->f, (v))
+#define dispatch_atomic_or2o(p, f, v) \
+ dispatch_atomic_or(&(p)->f, (v))
+#define dispatch_atomic_and2o(p, f, v) \
+ dispatch_atomic_and(&(p)->f, (v))
+#define dispatch_atomic_inc2o(p, f) \
+ dispatch_atomic_add2o((p), f, 1)
+#define dispatch_atomic_dec2o(p, f) \
+ dispatch_atomic_sub2o((p), f, 1)
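+// The *2o macros operate on a struct member given an object pointer and a
+// field name, e.g.
+//	dispatch_atomic_inc2o(dsema, dsema_group_waiters)
+// expands (with these GCC builtins) to
+//	__sync_add_and_fetch(&(dsema)->dsema_group_waiters, 1)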
+
#else
#error "Please upgrade to GCC 4.2 or newer."
#endif
-#if defined(__i386__) || defined(__x86_64__)
-#define _dispatch_hardware_pause() asm("pause")
-#define _dispatch_debugger() asm("int3")
+#if defined(__x86_64__) || defined(__i386__)
+
+// GCC emits nothing for __sync_synchronize() on x86_64 & i386
+#undef _dispatch_atomic_barrier
+#define _dispatch_atomic_barrier() \
+ __asm__ __volatile__( \
+ "mfence" \
+ : : : "memory")
+#undef dispatch_atomic_maximally_synchronizing_barrier
+#ifdef __LP64__
+#define dispatch_atomic_maximally_synchronizing_barrier() \
+ do { unsigned long _clbr; __asm__ __volatile__( \
+ "cpuid" \
+ : "=a" (_clbr) : "0" (0) : "rbx", "rcx", "rdx", "cc", "memory" \
+ ); } while(0)
#else
-#define _dispatch_hardware_pause() asm("")
-#define _dispatch_debugger() asm("trap")
+#ifdef __llvm__
+#define dispatch_atomic_maximally_synchronizing_barrier() \
+ do { unsigned long _clbr; __asm__ __volatile__( \
+ "cpuid" \
+ : "=a" (_clbr) : "0" (0) : "ebx", "ecx", "edx", "cc", "memory" \
+ ); } while(0)
+#else // gcc does not allow inline i386 asm to clobber ebx
+#define dispatch_atomic_maximally_synchronizing_barrier() \
+ do { unsigned long _clbr; __asm__ __volatile__( \
+ "pushl %%ebx\n\t" \
+ "cpuid\n\t" \
+ "popl %%ebx" \
+ : "=a" (_clbr) : "0" (0) : "ecx", "edx", "cc", "memory" \
+ ); } while(0)
#endif
-// really just a low level abort()
-#define _dispatch_hardware_crash() __builtin_trap()
+#endif
+#undef _dispatch_hardware_pause
+#define _dispatch_hardware_pause() asm("pause")
+#undef _dispatch_debugger
+#define _dispatch_debugger() asm("int3")
+#elif defined(__ppc__) || defined(__ppc64__)
+
+// GCC emits "sync" for __sync_synchronize() on ppc & ppc64
+#undef _dispatch_atomic_barrier
+#ifdef __LP64__
+#define _dispatch_atomic_barrier() \
+ __asm__ __volatile__( \
+ "isync\n\t" \
+		"lwsync" \
+ : : : "memory")
+#else
+#define _dispatch_atomic_barrier() \
+ __asm__ __volatile__( \
+ "isync\n\t" \
+ "eieio" \
+ : : : "memory")
+#endif
+#undef dispatch_atomic_maximally_synchronizing_barrier
+#define dispatch_atomic_maximally_synchronizing_barrier() \
+ __asm__ __volatile__( \
+ "sync" \
+ : : : "memory")
#endif
+
+
+#endif // __DISPATCH_SHIMS_ATOMIC__
diff --git a/src/shims/getprogname.h b/src/shims/getprogname.h
index c0e37d9..74aba13 100644
--- a/src/shims/getprogname.h
+++ b/src/shims/getprogname.h
@@ -22,20 +22,16 @@
#ifndef __DISPATCH_SHIMS_GETPROGNAME__
#define __DISPATCH_SHIMS_GETPROGNAME__
-#ifndef HAVE_GETPROGNAME
-
-static inline const char *
+#if !HAVE_GETPROGNAME
+static inline char *
getprogname(void)
{
# if HAVE_DECL_PROGRAM_INVOCATION_SHORT_NAME
- return program_invocation_short_name;
-#elif HAVE_GETEXECNAME
- return getexecname();
+ return program_invocation_short_name;
# else
# error getprogname(3) is not available on this platform
# endif
}
-
#endif /* HAVE_GETPROGNAME */
#endif /* __DISPATCH_SHIMS_GETPROGNAME__ */
diff --git a/src/shims/hw_config.h b/src/shims/hw_config.h
new file mode 100644
index 0000000..2d99759
--- /dev/null
+++ b/src/shims/hw_config.h
@@ -0,0 +1,106 @@
+/*
+ * Copyright (c) 2011 Apple Inc. All rights reserved.
+ *
+ * @APPLE_APACHE_LICENSE_HEADER_START@
+ *
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ *
+ * @APPLE_APACHE_LICENSE_HEADER_END@
+ */
+
+/*
+ * IMPORTANT: This header file describes INTERNAL interfaces to libdispatch
+ * which are subject to change in future releases of Mac OS X. Any applications
+ * relying on these interfaces WILL break.
+ */
+
+#ifndef __DISPATCH_SHIMS_HW_CONFIG__
+#define __DISPATCH_SHIMS_HW_CONFIG__
+
+#if defined(__APPLE__)
+#define DISPATCH_SYSCTL_LOGICAL_CPUS "hw.logicalcpu_max"
+#define DISPATCH_SYSCTL_PHYSICAL_CPUS "hw.physicalcpu_max"
+#define DISPATCH_SYSCTL_ACTIVE_CPUS "hw.activecpu"
+#elif defined(__FreeBSD__)
+#define DISPATCH_SYSCTL_LOGICAL_CPUS "kern.smp.cpus"
+#define DISPATCH_SYSCTL_PHYSICAL_CPUS "kern.smp.cpus"
+#define DISPATCH_SYSCTL_ACTIVE_CPUS "kern.smp.cpus"
+#endif
+
+static inline uint32_t
+_dispatch_get_logicalcpu_max()
+{
+ uint32_t val = 1;
+#if defined(_COMM_PAGE_LOGICAL_CPUS)
+ uint8_t* u8val = (uint8_t*)(uintptr_t)_COMM_PAGE_LOGICAL_CPUS;
+ val = (uint32_t)*u8val;
+#elif defined(DISPATCH_SYSCTL_LOGICAL_CPUS)
+ size_t valsz = sizeof(val);
+ int ret = sysctlbyname(DISPATCH_SYSCTL_LOGICAL_CPUS,
+ &val, &valsz, NULL, 0);
+ (void)dispatch_assume_zero(ret);
+ (void)dispatch_assume(valsz == sizeof(uint32_t));
+#elif HAVE_SYSCONF && defined(_SC_NPROCESSORS_ONLN)
+ int ret = (int)sysconf(_SC_NPROCESSORS_ONLN);
+ val = ret < 0 ? 1 : ret;
+#else
+#warning "no supported way to query logical CPU count"
+#endif
+ return val;
+}
+
+static inline uint32_t
+_dispatch_get_physicalcpu_max()
+{
+ uint32_t val = 1;
+#if defined(_COMM_PAGE_PHYSICAL_CPUS)
+ uint8_t* u8val = (uint8_t*)(uintptr_t)_COMM_PAGE_PHYSICAL_CPUS;
+ val = (uint32_t)*u8val;
+#elif defined(DISPATCH_SYSCTL_PHYSICAL_CPUS)
+ size_t valsz = sizeof(val);
+	int ret = sysctlbyname(DISPATCH_SYSCTL_PHYSICAL_CPUS,
+ &val, &valsz, NULL, 0);
+ (void)dispatch_assume_zero(ret);
+ (void)dispatch_assume(valsz == sizeof(uint32_t));
+#elif HAVE_SYSCONF && defined(_SC_NPROCESSORS_ONLN)
+ int ret = (int)sysconf(_SC_NPROCESSORS_ONLN);
+ val = ret < 0 ? 1 : ret;
+#else
+#warning "no supported way to query physical CPU count"
+#endif
+ return val;
+}
+
+static inline uint32_t
+_dispatch_get_activecpu()
+{
+ uint32_t val = 1;
+#if defined(_COMM_PAGE_ACTIVE_CPUS)
+ uint8_t* u8val = (uint8_t*)(uintptr_t)_COMM_PAGE_ACTIVE_CPUS;
+ val = (uint32_t)*u8val;
+#elif defined(DISPATCH_SYSCTL_ACTIVE_CPUS)
+ size_t valsz = sizeof(val);
+ int ret = sysctlbyname(DISPATCH_SYSCTL_ACTIVE_CPUS,
+ &val, &valsz, NULL, 0);
+ (void)dispatch_assume_zero(ret);
+ (void)dispatch_assume(valsz == sizeof(uint32_t));
+#elif HAVE_SYSCONF && defined(_SC_NPROCESSORS_ONLN)
+ int ret = (int)sysconf(_SC_NPROCESSORS_ONLN);
+ val = ret < 0 ? 1 : ret;
+#else
+#warning "no supported way to query active CPU count"
+#endif
+ return val;
+}
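+// All three helpers fall back to a count of 1 when no query mechanism is
+// available. A hypothetical caller sizing a worker pool might do:
+//
+//	uint32_t n = _dispatch_get_activecpu();
+//	// ... create at most n worker threads ...
+//
+// (Illustrative only; the real callers live elsewhere in the library.)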
+
+#endif /* __DISPATCH_SHIMS_HW_CONFIG__ */
diff --git a/src/shims/malloc_zone.h b/src/shims/malloc_zone.h
index 54e49b8..3975b4f 100644
--- a/src/shims/malloc_zone.h
+++ b/src/shims/malloc_zone.h
@@ -2,19 +2,19 @@
* Copyright (c) 2009 Apple Inc. All rights reserved.
*
* @APPLE_APACHE_LICENSE_HEADER_START@
- *
+ *
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
- *
+ *
* http://www.apache.org/licenses/LICENSE-2.0
- *
+ *
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
- *
+ *
* @APPLE_APACHE_LICENSE_HEADER_END@
*/
@@ -35,58 +35,64 @@
static inline malloc_zone_t *
malloc_create_zone(size_t start_size, unsigned flags)
{
- return ((malloc_zone_t *)(-1));
+
+ return ((void *)(-1));
}
static inline void
malloc_destroy_zone(malloc_zone_t *zone)
{
- /* No-op. */
+
}
static inline malloc_zone_t *
malloc_default_zone(void)
{
- return ((malloc_zone_t *)(-1));
+
+ return ((void *)(-1));
}
static inline malloc_zone_t *
malloc_zone_from_ptr(const void *ptr)
{
- return ((malloc_zone_t *)(-1));
+
+ return ((void *)(-1));
}
static inline void *
malloc_zone_malloc(malloc_zone_t *zone, size_t size)
{
+
return (malloc(size));
}
static inline void *
malloc_zone_calloc(malloc_zone_t *zone, size_t num_items, size_t size)
{
+
return (calloc(num_items, size));
}
-#if !TARGET_OS_WIN32
static inline void *
malloc_zone_realloc(malloc_zone_t *zone, void *ptr, size_t size)
{
+
return (realloc(ptr, size));
}
-#endif
static inline void
malloc_zone_free(malloc_zone_t *zone, void *ptr)
{
+
free(ptr);
}
static inline void
malloc_set_zone_name(malloc_zone_t *zone, const char *name)
{
+
/* No-op. */
}
-#endif /* !HAVE_MALLOC_CREATE_ZONE */
+#endif
#endif /* __DISPATCH_SHIMS_MALLOC_ZONE__ */
diff --git a/src/shims/perfmon.h b/src/shims/perfmon.h
index 4a07ad1..bf5eb28 100644
--- a/src/shims/perfmon.h
+++ b/src/shims/perfmon.h
@@ -2,19 +2,19 @@
* Copyright (c) 2008-2009 Apple Inc. All rights reserved.
*
* @APPLE_APACHE_LICENSE_HEADER_START@
- *
+ *
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
- *
+ *
* http://www.apache.org/licenses/LICENSE-2.0
- *
+ *
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
- *
+ *
* @APPLE_APACHE_LICENSE_HEADER_END@
*/
@@ -29,29 +29,36 @@
#if DISPATCH_PERF_MON
-#if defined (USE_APPLE_TSD_OPTIMIZATIONS) && defined(SIMULATE_5491082) && (defined(__i386__) || defined(__x86_64__))
+#if defined (USE_APPLE_TSD_OPTIMIZATIONS) && defined(SIMULATE_5491082) && \
+ (defined(__i386__) || defined(__x86_64__))
#ifdef __LP64__
-#define _dispatch_workitem_inc() asm("incq %%gs:%0" : "+m" \
- (*(void **)(dispatch_bcounter_key * sizeof(void *) + _PTHREAD_TSD_OFFSET)) :: "cc")
-#define _dispatch_workitem_dec() asm("decq %%gs:%0" : "+m" \
- (*(void **)(dispatch_bcounter_key * sizeof(void *) + _PTHREAD_TSD_OFFSET)) :: "cc")
+#define _dispatch_workitem_inc() asm("incq %%gs:%0" : "+m" \
+ (*(void **)(dispatch_bcounter_key * sizeof(void *) + \
+ _PTHREAD_TSD_OFFSET)) :: "cc")
+#define _dispatch_workitem_dec() asm("decq %%gs:%0" : "+m" \
+ (*(void **)(dispatch_bcounter_key * sizeof(void *) + \
+ _PTHREAD_TSD_OFFSET)) :: "cc")
#else
-#define _dispatch_workitem_inc() asm("incl %%gs:%0" : "+m" \
- (*(void **)(dispatch_bcounter_key * sizeof(void *) + _PTHREAD_TSD_OFFSET)) :: "cc")
-#define _dispatch_workitem_dec() asm("decl %%gs:%0" : "+m" \
- (*(void **)(dispatch_bcounter_key * sizeof(void *) + _PTHREAD_TSD_OFFSET)) :: "cc")
+#define _dispatch_workitem_inc() asm("incl %%gs:%0" : "+m" \
+ (*(void **)(dispatch_bcounter_key * sizeof(void *) + \
+ _PTHREAD_TSD_OFFSET)) :: "cc")
+#define _dispatch_workitem_dec() asm("decl %%gs:%0" : "+m" \
+ (*(void **)(dispatch_bcounter_key * sizeof(void *) + \
+ _PTHREAD_TSD_OFFSET)) :: "cc")
#endif
#else /* !USE_APPLE_TSD_OPTIMIZATIONS */
static inline void
_dispatch_workitem_inc(void)
{
- unsigned long cnt = (unsigned long)_dispatch_thread_getspecific(dispatch_bcounter_key);
+ unsigned long cnt;
+ cnt = (unsigned long)_dispatch_thread_getspecific(dispatch_bcounter_key);
_dispatch_thread_setspecific(dispatch_bcounter_key, (void *)++cnt);
}
static inline void
_dispatch_workitem_dec(void)
{
- unsigned long cnt = (unsigned long)_dispatch_thread_getspecific(dispatch_bcounter_key);
+ unsigned long cnt;
+ cnt = (unsigned long)_dispatch_thread_getspecific(dispatch_bcounter_key);
_dispatch_thread_setspecific(dispatch_bcounter_key, (void *)--cnt);
}
#endif /* USE_APPLE_TSD_OPTIMIZATIONS */
@@ -85,6 +92,6 @@
#else
#define _dispatch_workitem_inc()
#define _dispatch_workitem_dec()
-#endif // DISPATCH_PERF_MON
+#endif // DISPATCH_PERF_MON
-#endif /* __DISPATCH_SHIMS_PERFMON__ */
+#endif
diff --git a/src/shims/time.h b/src/shims/time.h
index aa574db..9ae9160 100644
--- a/src/shims/time.h
+++ b/src/shims/time.h
@@ -1,20 +1,20 @@
/*
- * Copyright (c) 2008-2009 Apple Inc. All rights reserved.
+ * Copyright (c) 2008-2011 Apple Inc. All rights reserved.
*
* @APPLE_APACHE_LICENSE_HEADER_START@
- *
+ *
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
- *
+ *
* http://www.apache.org/licenses/LICENSE-2.0
- *
+ *
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
- *
+ *
* @APPLE_APACHE_LICENSE_HEADER_END@
*/
@@ -33,27 +33,19 @@
uint64_t _dispatch_get_nanoseconds(void);
-#if TARGET_OS_WIN32
-static inline unsigned int
-sleep(unsigned int seconds)
-{
- Sleep(seconds * 1000); // milliseconds
- return 0;
-}
-#endif
-
-#if (defined(__i386__) || defined(__x86_64__)) && HAVE_MACH_ABSOLUTE_TIME
-// x86 currently implements mach time in nanoseconds; this is NOT likely to change
-#define _dispatch_time_mach2nano(x) (x)
-#define _dispatch_time_nano2mach(x) (x)
+#if defined(__i386__) || defined(__x86_64__) || !HAVE_MACH_ABSOLUTE_TIME
+// x86 currently implements mach time in nanoseconds
+// this is NOT likely to change
+#define _dispatch_time_mach2nano(x) ({x;})
+#define _dispatch_time_nano2mach(x) ({x;})
#else
typedef struct _dispatch_host_time_data_s {
long double frac;
bool ratio_1_to_1;
dispatch_once_t pred;
} _dispatch_host_time_data_s;
-__private_extern__ _dispatch_host_time_data_s _dispatch_host_time_data;
-__private_extern__ void _dispatch_get_host_time_init(void *context);
+extern _dispatch_host_time_data_s _dispatch_host_time_data;
+void _dispatch_get_host_time_init(void *context);
static inline uint64_t
_dispatch_time_mach2nano(uint64_t machtime)
@@ -61,7 +53,7 @@
_dispatch_host_time_data_s *const data = &_dispatch_host_time_data;
dispatch_once_f(&data->pred, NULL, _dispatch_get_host_time_init);
- return (uint64_t)(machtime * data->frac);
+ return machtime * data->frac;
}
static inline int64_t
@@ -74,7 +66,7 @@
return nsec;
}
- long double big_tmp = (long double)nsec;
+ long double big_tmp = nsec;
// Divide by tbi.numer/tbi.denom to convert nsec to Mach absolute time
big_tmp /= data->frac;
@@ -86,22 +78,14 @@
if (slowpath(big_tmp < INT64_MIN)) {
return INT64_MIN;
}
- return (int64_t)big_tmp;
+ return big_tmp;
}
#endif
static inline uint64_t
_dispatch_absolute_time(void)
{
-#if HAVE_MACH_ABSOLUTE_TIME
- return mach_absolute_time();
-#elif TARGET_OS_WIN32
- LARGE_INTEGER now;
- if (!QueryPerformanceCounter(&now)) {
- return 0;
- }
- return now.QuadPart;
-#else
+#if !HAVE_MACH_ABSOLUTE_TIME
struct timespec ts;
int ret;
@@ -116,7 +100,9 @@
/* XXXRW: Some kind of overflow detection needed? */
return (ts.tv_sec * NSEC_PER_SEC + ts.tv_nsec);
+#else
+ return mach_absolute_time();
#endif
}
-#endif /* __DISPATCH_SHIMS_TIME__ */
+#endif
diff --git a/src/shims/tsd.h b/src/shims/tsd.h
index 7652e23..b8c6640 100644
--- a/src/shims/tsd.h
+++ b/src/shims/tsd.h
@@ -1,20 +1,20 @@
/*
- * Copyright (c) 2008-2009 Apple Inc. All rights reserved.
+ * Copyright (c) 2008-2011 Apple Inc. All rights reserved.
*
* @APPLE_APACHE_LICENSE_HEADER_START@
- *
+ *
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
- *
+ *
* http://www.apache.org/licenses/LICENSE-2.0
- *
+ *
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
- *
+ *
* @APPLE_APACHE_LICENSE_HEADER_END@
*/
@@ -27,94 +27,78 @@
#ifndef __DISPATCH_SHIMS_TSD__
#define __DISPATCH_SHIMS_TSD__
-#if HAVE_PTHREAD_KEY_INIT_NP
-static const unsigned long dispatch_queue_key = __PTK_LIBDISPATCH_KEY0;
-static const unsigned long dispatch_sema4_key = __PTK_LIBDISPATCH_KEY1;
-static const unsigned long dispatch_cache_key = __PTK_LIBDISPATCH_KEY2;
-static const unsigned long dispatch_bcounter_key = __PTK_LIBDISPATCH_KEY3;
-//__PTK_LIBDISPATCH_KEY4
+#if HAVE_PTHREAD_MACHDEP_H
+#include <pthread_machdep.h>
+#endif
+
+#define DISPATCH_TSD_INLINE DISPATCH_ALWAYS_INLINE_NDEBUG
+
+#if USE_APPLE_TSD_OPTIMIZATIONS && HAVE_PTHREAD_KEY_INIT_NP && \
+ !defined(DISPATCH_USE_DIRECT_TSD)
+#define DISPATCH_USE_DIRECT_TSD 1
+#endif
+
+#if DISPATCH_USE_DIRECT_TSD
+static const unsigned long dispatch_queue_key = __PTK_LIBDISPATCH_KEY0;
+static const unsigned long dispatch_sema4_key = __PTK_LIBDISPATCH_KEY1;
+static const unsigned long dispatch_cache_key = __PTK_LIBDISPATCH_KEY2;
+static const unsigned long dispatch_io_key = __PTK_LIBDISPATCH_KEY3;
+static const unsigned long dispatch_apply_key = __PTK_LIBDISPATCH_KEY4;
+static const unsigned long dispatch_bcounter_key = __PTK_LIBDISPATCH_KEY5;
//__PTK_LIBDISPATCH_KEY5
-#else
-extern pthread_key_t dispatch_queue_key;
-extern pthread_key_t dispatch_sema4_key;
-extern pthread_key_t dispatch_cache_key;
-extern pthread_key_t dispatch_bcounter_key;
-#endif
-#if USE_APPLE_TSD_OPTIMIZATIONS
-#define SIMULATE_5491082 1
-#ifndef _PTHREAD_TSD_OFFSET
-#define _PTHREAD_TSD_OFFSET 0
-#endif
-
+DISPATCH_TSD_INLINE
static inline void
-_dispatch_thread_setspecific(unsigned long k, void *v)
+_dispatch_thread_key_create(const unsigned long *k, void (*d)(void *))
{
-#if defined(SIMULATE_5491082) && defined(__i386__)
- asm("movl %1, %%gs:%0" : "=m" (*(void **)(k * sizeof(void *) + _PTHREAD_TSD_OFFSET)) : "ri" (v) : "memory");
-#elif defined(SIMULATE_5491082) && defined(__x86_64__)
- asm("movq %1, %%gs:%0" : "=m" (*(void **)(k * sizeof(void *) + _PTHREAD_TSD_OFFSET)) : "rn" (v) : "memory");
-#else
- int res;
- if (_pthread_has_direct_tsd()) {
- res = _pthread_setspecific_direct(k, v);
- } else {
- res = pthread_setspecific(k, v);
- }
- dispatch_assert_zero(res);
-#endif
+ dispatch_assert_zero(pthread_key_init_np((int)*k, d));
}
+#else
+pthread_key_t dispatch_queue_key;
+pthread_key_t dispatch_sema4_key;
+pthread_key_t dispatch_cache_key;
+pthread_key_t dispatch_io_key;
+pthread_key_t dispatch_apply_key;
+pthread_key_t dispatch_bcounter_key;
-static inline void *
-_dispatch_thread_getspecific(unsigned long k)
+DISPATCH_TSD_INLINE
+static inline void
+_dispatch_thread_key_create(pthread_key_t *k, void (*d)(void *))
{
-#if defined(SIMULATE_5491082) && (defined(__i386__) || defined(__x86_64__))
- void *rval;
- asm("mov %%gs:%1, %0" : "=r" (rval) : "m" (*(void **)(k * sizeof(void *) + _PTHREAD_TSD_OFFSET)));
- return rval;
-#else
- if (_pthread_has_direct_tsd()) {
- return _pthread_getspecific_direct(k);
- } else {
- return pthread_getspecific(k);
- }
-#endif
+ dispatch_assert_zero(pthread_key_create(k, d));
}
+#endif
-#else /* !USE_APPLE_TSD_OPTIMIZATIONS */
-
+#if DISPATCH_USE_TSD_BASE && !DISPATCH_DEBUG
+#else // DISPATCH_USE_TSD_BASE
+DISPATCH_TSD_INLINE
static inline void
_dispatch_thread_setspecific(pthread_key_t k, void *v)
{
- int res;
-
- res = pthread_setspecific(k, v);
- dispatch_assert_zero(res);
+#if DISPATCH_USE_DIRECT_TSD
+ if (_pthread_has_direct_tsd()) {
+ (void)_pthread_setspecific_direct(k, v);
+ return;
+ }
+#endif
+ dispatch_assert_zero(pthread_setspecific(k, v));
}
+DISPATCH_TSD_INLINE
static inline void *
_dispatch_thread_getspecific(pthread_key_t k)
{
-
+#if DISPATCH_USE_DIRECT_TSD
+ if (_pthread_has_direct_tsd()) {
+ return _pthread_getspecific_direct(k);
+ }
+#endif
return pthread_getspecific(k);
}
-#endif /* USE_APPLE_TSD_OPTIMIZATIONS */
-
-#if HAVE_PTHREAD_KEY_INIT_NP
-static inline void
-_dispatch_thread_key_init_np(unsigned long k, void (*d)(void *))
-{
- dispatch_assert_zero(pthread_key_init_np((int)k, d));
-}
-#else
-static inline void
-_dispatch_thread_key_create(pthread_key_t *key, void (*destructor)(void *))
-{
-
- dispatch_assert_zero(pthread_key_create(key, destructor));
-}
-#endif
+#endif // DISPATCH_USE_TSD_BASE
#define _dispatch_thread_self (uintptr_t)pthread_self
-#endif /* __DISPATCH_SHIMS_TSD__ */
+#undef DISPATCH_TSD_INLINE
+
+#endif
diff --git a/src/source.c b/src/source.c
index b6806b8..cf612aa 100644
--- a/src/source.c
+++ b/src/source.c
@@ -1,20 +1,20 @@
/*
- * Copyright (c) 2008-2009 Apple Inc. All rights reserved.
+ * Copyright (c) 2008-2011 Apple Inc. All rights reserved.
*
* @APPLE_APACHE_LICENSE_HEADER_START@
- *
+ *
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
- *
+ *
* http://www.apache.org/licenses/LICENSE-2.0
- *
+ *
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
- *
+ *
* @APPLE_APACHE_LICENSE_HEADER_END@
*/
@@ -25,58 +25,176 @@
#endif
#include <sys/mount.h>
-#ifndef DISPATCH_NO_LEGACY
-struct dispatch_source_attr_vtable_s {
- DISPATCH_VTABLE_HEADER(dispatch_source_attr_s);
+static void _dispatch_source_dispose(dispatch_source_t ds);
+static dispatch_queue_t _dispatch_source_invoke(dispatch_source_t ds);
+static bool _dispatch_source_probe(dispatch_source_t ds);
+static void _dispatch_source_merge_kevent(dispatch_source_t ds,
+ const struct kevent *ke);
+static void _dispatch_kevent_register(dispatch_source_t ds);
+static void _dispatch_kevent_unregister(dispatch_source_t ds);
+static bool _dispatch_kevent_resume(dispatch_kevent_t dk, uint32_t new_flags,
+ uint32_t del_flags);
+static inline void _dispatch_source_timer_init(void);
+static void _dispatch_timer_list_update(dispatch_source_t ds);
+static inline unsigned long _dispatch_source_timer_data(
+ dispatch_source_refs_t dr, unsigned long prev);
+#if HAVE_MACH
+static kern_return_t _dispatch_kevent_machport_resume(dispatch_kevent_t dk,
+ uint32_t new_flags, uint32_t del_flags);
+static void _dispatch_drain_mach_messages(struct kevent *ke);
+#endif
+static size_t _dispatch_source_kevent_debug(dispatch_source_t ds,
+ char* buf, size_t bufsiz);
+#if DISPATCH_DEBUG
+static void _dispatch_kevent_debugger(void *context);
+#endif
+
+#pragma mark -
+#pragma mark dispatch_source_t
+
+const struct dispatch_source_vtable_s _dispatch_source_kevent_vtable = {
+ .do_type = DISPATCH_SOURCE_KEVENT_TYPE,
+ .do_kind = "kevent-source",
+ .do_invoke = _dispatch_source_invoke,
+ .do_dispose = _dispatch_source_dispose,
+ .do_probe = _dispatch_source_probe,
+ .do_debug = _dispatch_source_kevent_debug,
};
-struct dispatch_source_attr_s {
- DISPATCH_STRUCT_HEADER(dispatch_source_attr_s, dispatch_source_attr_vtable_s);
- void* finalizer_ctxt;
- dispatch_source_finalizer_function_t finalizer_func;
- void* context;
-};
-#endif /* DISPATCH_NO_LEGACY */
+dispatch_source_t
+dispatch_source_create(dispatch_source_type_t type,
+ uintptr_t handle,
+ unsigned long mask,
+ dispatch_queue_t q)
+{
+ const struct kevent *proto_kev = &type->ke;
+ dispatch_source_t ds = NULL;
+ dispatch_kevent_t dk = NULL;
-#define _dispatch_source_call_block ((void *)-1)
-static void _dispatch_source_latch_and_call(dispatch_source_t ds);
-static void _dispatch_source_cancel_callout(dispatch_source_t ds);
-static size_t dispatch_source_debug_attr(dispatch_source_t ds, char* buf, size_t bufsiz);
+ // input validation
+ if (type == NULL || (mask & ~type->mask)) {
+ goto out_bad;
+ }
+
+ switch (type->ke.filter) {
+ case EVFILT_SIGNAL:
+ if (handle >= NSIG) {
+ goto out_bad;
+ }
+ break;
+ case EVFILT_FS:
+#if DISPATCH_USE_VM_PRESSURE
+ case EVFILT_VM:
+#endif
+ case DISPATCH_EVFILT_CUSTOM_ADD:
+ case DISPATCH_EVFILT_CUSTOM_OR:
+ case DISPATCH_EVFILT_TIMER:
+ if (handle) {
+ goto out_bad;
+ }
+ break;
+ default:
+ break;
+ }
+
+ ds = calloc(1ul, sizeof(struct dispatch_source_s));
+ if (slowpath(!ds)) {
+ goto out_bad;
+ }
+ dk = calloc(1ul, sizeof(struct dispatch_kevent_s));
+ if (slowpath(!dk)) {
+ goto out_bad;
+ }
+
+ dk->dk_kevent = *proto_kev;
+ dk->dk_kevent.ident = handle;
+ dk->dk_kevent.flags |= EV_ADD|EV_ENABLE;
+ dk->dk_kevent.fflags |= (uint32_t)mask;
+ dk->dk_kevent.udata = dk;
+ TAILQ_INIT(&dk->dk_sources);
+
+ // Initialize as a queue first, then override some settings below.
+ _dispatch_queue_init((dispatch_queue_t)ds);
+ strlcpy(ds->dq_label, "source", sizeof(ds->dq_label));
+
+ // Dispatch Object
+ ds->do_vtable = &_dispatch_source_kevent_vtable;
+	ds->do_ref_cnt++; // the reference the manager queue holds
+ ds->do_ref_cnt++; // since source is created suspended
+ ds->do_suspend_cnt = DISPATCH_OBJECT_SUSPEND_INTERVAL;
+ // The initial target queue is the manager queue, in order to get
+ // the source installed. <rdar://problem/8928171>
+ ds->do_targetq = &_dispatch_mgr_q;
+
+ // Dispatch Source
+ ds->ds_ident_hack = dk->dk_kevent.ident;
+ ds->ds_dkev = dk;
+ ds->ds_pending_data_mask = dk->dk_kevent.fflags;
+ if ((EV_DISPATCH|EV_ONESHOT) & proto_kev->flags) {
+ ds->ds_is_level = true;
+ ds->ds_needs_rearm = true;
+ } else if (!(EV_CLEAR & proto_kev->flags)) {
+ // we cheat and use EV_CLEAR to mean a "flag thingy"
+ ds->ds_is_adder = true;
+ }
+
+ // Some sources require special processing
+ if (type->init != NULL) {
+ type->init(ds, type, handle, mask, q);
+ }
+ if (fastpath(!ds->ds_refs)) {
+ ds->ds_refs = calloc(1ul, sizeof(struct dispatch_source_refs_s));
+ if (slowpath(!ds->ds_refs)) {
+ goto out_bad;
+ }
+ }
+ ds->ds_refs->dr_source_wref = _dispatch_ptr2wref(ds);
+ dispatch_assert(!(ds->ds_is_level && ds->ds_is_adder));
+
+ // First item on the queue sets the user-specified target queue
+ dispatch_set_target_queue(ds, q);
+#if DISPATCH_DEBUG
+ dispatch_debug(ds, "%s", __FUNCTION__);
+#endif
+ return ds;
+
+out_bad:
+ free(ds);
+ free(dk);
+ return NULL;
+}
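+// A minimal client-side sketch of the public API implemented above (the fd
+// and target queue are placeholders):
+//
+//	dispatch_source_t ds = dispatch_source_create(DISPATCH_SOURCE_TYPE_READ,
+//			fd, 0, dispatch_get_global_queue(
+//			DISPATCH_QUEUE_PRIORITY_DEFAULT, 0));
+//	dispatch_source_set_event_handler(ds, ^{
+//		// dispatch_source_get_data(ds) estimates the bytes readable on fd
+//	});
+//	dispatch_resume(ds);	// sources are created suspended (see do_suspend_cnt)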
+
+static void
+_dispatch_source_dispose(dispatch_source_t ds)
+{
+ free(ds->ds_refs);
+ _dispatch_queue_dispose((dispatch_queue_t)ds);
+}
+
+void
+_dispatch_source_xref_release(dispatch_source_t ds)
+{
+ if (slowpath(DISPATCH_OBJECT_SUSPENDED(ds))) {
+ // Arguments for and against this assert are within 6705399
+ DISPATCH_CLIENT_CRASH("Release of a suspended object");
+ }
+ _dispatch_wakeup(ds);
+ _dispatch_release(ds);
+}
void
dispatch_source_cancel(dispatch_source_t ds)
{
#if DISPATCH_DEBUG
- dispatch_debug(ds, __FUNCTION__);
+ dispatch_debug(ds, "%s", __FUNCTION__);
#endif
// Right after we set the cancel flag, someone else
- // could potentially invoke the source, do the cancelation,
+	// could potentially invoke the source, do the cancellation,
// unregister the source, and deallocate it. We would
// need to therefore retain/release before setting the bit
_dispatch_retain(ds);
- dispatch_atomic_or(&ds->ds_atomic_flags, DSF_CANCELED);
- _dispatch_wakeup(ds);
- _dispatch_release(ds);
-}
-
-DISPATCH_NOINLINE
-void
-_dispatch_source_xref_release(dispatch_source_t ds)
-{
-#ifndef DISPATCH_NO_LEGACY
- if (ds->ds_is_legacy) {
- if (!(ds->ds_timer.flags & DISPATCH_TIMER_ONESHOT)) {
- dispatch_source_cancel(ds);
- }
- // Clients often leave sources suspended at the last release
- dispatch_atomic_and(&ds->do_suspend_cnt, DISPATCH_OBJECT_SUSPEND_LOCK);
- } else
-#endif
- if (slowpath(DISPATCH_OBJECT_SUSPENDED(ds))) {
- // Arguments for and against this assert are within 6705399
- DISPATCH_CLIENT_CRASH("Release of a suspended object");
- }
+ (void)dispatch_atomic_or2o(ds, ds_atomic_flags, DSF_CANCELED);
_dispatch_wakeup(ds);
_dispatch_release(ds);
}
@@ -106,7 +224,291 @@
return ds->ds_data;
}
-dispatch_queue_t
+void
+dispatch_source_merge_data(dispatch_source_t ds, unsigned long val)
+{
+ struct kevent kev = {
+ .fflags = (typeof(kev.fflags))val,
+ .data = val,
+ };
+
+ dispatch_assert(
+ ds->ds_dkev->dk_kevent.filter == DISPATCH_EVFILT_CUSTOM_ADD ||
+ ds->ds_dkev->dk_kevent.filter == DISPATCH_EVFILT_CUSTOM_OR);
+
+ _dispatch_source_merge_kevent(ds, &kev);
+}
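+// The custom ADD/OR filters are driven entirely from user space through the
+// call above. A minimal sketch of typical client usage:
+//
+//	dispatch_source_t ds = dispatch_source_create(
+//			DISPATCH_SOURCE_TYPE_DATA_ADD, 0, 0, some_queue /* placeholder */);
+//	dispatch_source_set_event_handler(ds, ^{
+//		// coalesced total of the values merged since the last callout
+//		unsigned long pending = dispatch_source_get_data(ds);
+//		(void)pending;
+//	});
+//	dispatch_resume(ds);
+//	dispatch_source_merge_data(ds, 1);	// safe to call from any thread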
+
+#pragma mark -
+#pragma mark dispatch_source_handler
+
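+// All of the handler setters below funnel through dispatch_barrier_async_f()
+// on the source itself (a dispatch source is initialized as a queue), so the
+// ds_refs handler fields are only ever mutated from the source's own
+// serialized context rather than with atomics.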
+#ifdef __BLOCKS__
+// 6618342 Contact the team that owns the Instrument DTrace probe before
+// renaming this symbol
+static void
+_dispatch_source_set_event_handler2(void *context)
+{
+ struct Block_layout *bl = context;
+
+ dispatch_source_t ds = (dispatch_source_t)_dispatch_queue_get_current();
+ dispatch_assert(ds->do_vtable == &_dispatch_source_kevent_vtable);
+ dispatch_source_refs_t dr = ds->ds_refs;
+
+ if (ds->ds_handler_is_block && dr->ds_handler_ctxt) {
+ Block_release(dr->ds_handler_ctxt);
+ }
+ dr->ds_handler_func = bl ? (void *)bl->invoke : NULL;
+ dr->ds_handler_ctxt = bl;
+ ds->ds_handler_is_block = true;
+}
+
+void
+dispatch_source_set_event_handler(dispatch_source_t ds,
+ dispatch_block_t handler)
+{
+ handler = _dispatch_Block_copy(handler);
+ dispatch_barrier_async_f((dispatch_queue_t)ds, handler,
+ _dispatch_source_set_event_handler2);
+}
+#endif /* __BLOCKS__ */
+
+static void
+_dispatch_source_set_event_handler_f(void *context)
+{
+ dispatch_source_t ds = (dispatch_source_t)_dispatch_queue_get_current();
+ dispatch_assert(ds->do_vtable == &_dispatch_source_kevent_vtable);
+ dispatch_source_refs_t dr = ds->ds_refs;
+
+#ifdef __BLOCKS__
+ if (ds->ds_handler_is_block && dr->ds_handler_ctxt) {
+ Block_release(dr->ds_handler_ctxt);
+ }
+#endif
+ dr->ds_handler_func = context;
+ dr->ds_handler_ctxt = ds->do_ctxt;
+ ds->ds_handler_is_block = false;
+}
+
+void
+dispatch_source_set_event_handler_f(dispatch_source_t ds,
+ dispatch_function_t handler)
+{
+ dispatch_barrier_async_f((dispatch_queue_t)ds, handler,
+ _dispatch_source_set_event_handler_f);
+}
+
+#ifdef __BLOCKS__
+// 6618342 Contact the team that owns the Instrument DTrace probe before
+// renaming this symbol
+static void
+_dispatch_source_set_cancel_handler2(void *context)
+{
+ dispatch_source_t ds = (dispatch_source_t)_dispatch_queue_get_current();
+ dispatch_assert(ds->do_vtable == &_dispatch_source_kevent_vtable);
+ dispatch_source_refs_t dr = ds->ds_refs;
+
+ if (ds->ds_cancel_is_block && dr->ds_cancel_handler) {
+ Block_release(dr->ds_cancel_handler);
+ }
+ dr->ds_cancel_handler = context;
+ ds->ds_cancel_is_block = true;
+}
+
+void
+dispatch_source_set_cancel_handler(dispatch_source_t ds,
+ dispatch_block_t handler)
+{
+ handler = _dispatch_Block_copy(handler);
+ dispatch_barrier_async_f((dispatch_queue_t)ds, handler,
+ _dispatch_source_set_cancel_handler2);
+}
+#endif /* __BLOCKS__ */
+
+static void
+_dispatch_source_set_cancel_handler_f(void *context)
+{
+ dispatch_source_t ds = (dispatch_source_t)_dispatch_queue_get_current();
+ dispatch_assert(ds->do_vtable == &_dispatch_source_kevent_vtable);
+ dispatch_source_refs_t dr = ds->ds_refs;
+
+#ifdef __BLOCKS__
+ if (ds->ds_cancel_is_block && dr->ds_cancel_handler) {
+ Block_release(dr->ds_cancel_handler);
+ }
+#endif
+ dr->ds_cancel_handler = context;
+ ds->ds_cancel_is_block = false;
+}
+
+void
+dispatch_source_set_cancel_handler_f(dispatch_source_t ds,
+ dispatch_function_t handler)
+{
+ dispatch_barrier_async_f((dispatch_queue_t)ds, handler,
+ _dispatch_source_set_cancel_handler_f);
+}
+
+#ifdef __BLOCKS__
+static void
+_dispatch_source_set_registration_handler2(void *context)
+{
+ dispatch_source_t ds = (dispatch_source_t)_dispatch_queue_get_current();
+ dispatch_assert(ds->do_vtable == &_dispatch_source_kevent_vtable);
+ dispatch_source_refs_t dr = ds->ds_refs;
+
+ if (ds->ds_registration_is_block && dr->ds_registration_handler) {
+ Block_release(dr->ds_registration_handler);
+ }
+ dr->ds_registration_handler = context;
+ ds->ds_registration_is_block = true;
+}
+
+void
+dispatch_source_set_registration_handler(dispatch_source_t ds,
+ dispatch_block_t handler)
+{
+ handler = _dispatch_Block_copy(handler);
+ dispatch_barrier_async_f((dispatch_queue_t)ds, handler,
+ _dispatch_source_set_registration_handler2);
+}
+#endif /* __BLOCKS__ */
+
+static void
+_dispatch_source_set_registration_handler_f(void *context)
+{
+ dispatch_source_t ds = (dispatch_source_t)_dispatch_queue_get_current();
+ dispatch_assert(ds->do_vtable == &_dispatch_source_kevent_vtable);
+ dispatch_source_refs_t dr = ds->ds_refs;
+
+#ifdef __BLOCKS__
+ if (ds->ds_registration_is_block && dr->ds_registration_handler) {
+ Block_release(dr->ds_registration_handler);
+ }
+#endif
+ dr->ds_registration_handler = context;
+ ds->ds_registration_is_block = false;
+}
+
+void
+dispatch_source_set_registration_handler_f(dispatch_source_t ds,
+ dispatch_function_t handler)
+{
+ dispatch_barrier_async_f((dispatch_queue_t)ds, handler,
+ _dispatch_source_set_registration_handler_f);
+}
+
+#pragma mark -
+#pragma mark dispatch_source_invoke
+
+static void
+_dispatch_source_registration_callout(dispatch_source_t ds)
+{
+ dispatch_source_refs_t dr = ds->ds_refs;
+
+ if ((ds->ds_atomic_flags & DSF_CANCELED) || (ds->do_xref_cnt == 0)) {
+ // no registration callout if source is canceled rdar://problem/8955246
+#ifdef __BLOCKS__
+ if (ds->ds_registration_is_block) {
+ Block_release(dr->ds_registration_handler);
+ }
+ } else if (ds->ds_registration_is_block) {
+ dispatch_block_t b = dr->ds_registration_handler;
+ _dispatch_client_callout_block(b);
+ Block_release(dr->ds_registration_handler);
+#endif
+ } else {
+ dispatch_function_t f = dr->ds_registration_handler;
+ _dispatch_client_callout(ds->do_ctxt, f);
+ }
+ ds->ds_registration_is_block = false;
+ dr->ds_registration_handler = NULL;
+}
+
+static void
+_dispatch_source_cancel_callout(dispatch_source_t ds)
+{
+ dispatch_source_refs_t dr = ds->ds_refs;
+
+ ds->ds_pending_data_mask = 0;
+ ds->ds_pending_data = 0;
+ ds->ds_data = 0;
+
+#ifdef __BLOCKS__
+ if (ds->ds_handler_is_block) {
+ Block_release(dr->ds_handler_ctxt);
+ ds->ds_handler_is_block = false;
+ dr->ds_handler_func = NULL;
+ dr->ds_handler_ctxt = NULL;
+ }
+ if (ds->ds_registration_is_block) {
+ Block_release(dr->ds_registration_handler);
+ ds->ds_registration_is_block = false;
+ dr->ds_registration_handler = NULL;
+ }
+#endif
+
+ if (!dr->ds_cancel_handler) {
+ return;
+ }
+ if (ds->ds_cancel_is_block) {
+#ifdef __BLOCKS__
+ dispatch_block_t b = dr->ds_cancel_handler;
+ if (ds->ds_atomic_flags & DSF_CANCELED) {
+ _dispatch_client_callout_block(b);
+ }
+ Block_release(dr->ds_cancel_handler);
+ ds->ds_cancel_is_block = false;
+#endif
+ } else {
+ dispatch_function_t f = dr->ds_cancel_handler;
+ if (ds->ds_atomic_flags & DSF_CANCELED) {
+ _dispatch_client_callout(ds->do_ctxt, f);
+ }
+ }
+ dr->ds_cancel_handler = NULL;
+}
+
+static void
+_dispatch_source_latch_and_call(dispatch_source_t ds)
+{
+ unsigned long prev;
+
+ if ((ds->ds_atomic_flags & DSF_CANCELED) || (ds->do_xref_cnt == 0)) {
+ return;
+ }
+ dispatch_source_refs_t dr = ds->ds_refs;
+ prev = dispatch_atomic_xchg2o(ds, ds_pending_data, 0);
+ if (ds->ds_is_level) {
+ ds->ds_data = ~prev;
+ } else if (ds->ds_is_timer && ds_timer(dr).target && prev) {
+ ds->ds_data = _dispatch_source_timer_data(dr, prev);
+ } else {
+ ds->ds_data = prev;
+ }
+ if (dispatch_assume(prev) && dr->ds_handler_func) {
+ _dispatch_client_callout(dr->ds_handler_ctxt, dr->ds_handler_func);
+ }
+}
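+// Note on the level-triggered case: _dispatch_source_merge_kevent() stores
+// the pending data as ~ke->data and the complement is undone here, so even
+// "zero bytes available" (e.g. EV_EOF) leaves ds_pending_data non-zero and
+// still produces a callout. The xchg to 0 latches the value exactly once per
+// handler invocation.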
+
+static void
+_dispatch_source_kevent_resume(dispatch_source_t ds, uint32_t new_flags)
+{
+ switch (ds->ds_dkev->dk_kevent.filter) {
+ case DISPATCH_EVFILT_TIMER:
+ // called on manager queue only
+ return _dispatch_timer_list_update(ds);
+ case EVFILT_MACHPORT:
+ if (ds->ds_pending_data_mask & DISPATCH_MACH_RECV_MESSAGE) {
+ new_flags |= DISPATCH_MACH_RECV_MESSAGE; // emulate EV_DISPATCH
+ }
+ break;
+ }
+ if (_dispatch_kevent_resume(ds->ds_dkev, new_flags, 0)) {
+ _dispatch_kevent_unregister(ds);
+ }
+}
+
+static dispatch_queue_t
_dispatch_source_invoke(dispatch_source_t ds)
{
// This function performs all source actions. Each action is responsible
@@ -115,15 +517,36 @@
// will be returned and the invoke will be re-driven on that queue.
// The order of tests here in invoke and in probe should be consistent.
-
+
dispatch_queue_t dq = _dispatch_queue_get_current();
+ dispatch_source_refs_t dr = ds->ds_refs;
if (!ds->ds_is_installed) {
// The source needs to be installed on the manager queue.
if (dq != &_dispatch_mgr_q) {
return &_dispatch_mgr_q;
}
- _dispatch_kevent_merge(ds);
+ _dispatch_kevent_register(ds);
+ if (dr->ds_registration_handler) {
+ return ds->do_targetq;
+ }
+ if (slowpath(ds->do_xref_cnt == 0)) {
+ return &_dispatch_mgr_q; // rdar://problem/9558246
+ }
+ } else if (slowpath(DISPATCH_OBJECT_SUSPENDED(ds))) {
+ // Source suspended by an item drained from the source queue.
+ return NULL;
+ } else if (dr->ds_registration_handler) {
+ // The source has been registered and the registration handler needs
+ // to be delivered on the target queue.
+ if (dq != ds->do_targetq) {
+ return ds->do_targetq;
+ }
+ // clears ds_registration_handler
+ _dispatch_source_registration_callout(ds);
+ if (slowpath(ds->do_xref_cnt == 0)) {
+ return &_dispatch_mgr_q; // rdar://problem/9558246
+ }
} else if ((ds->ds_atomic_flags & DSF_CANCELED) || (ds->do_xref_cnt == 0)) {
// The source has been cancelled and needs to be uninstalled from the
// manager queue. After uninstallation, the cancellation handler needs
@@ -132,13 +555,13 @@
if (dq != &_dispatch_mgr_q) {
return &_dispatch_mgr_q;
}
- _dispatch_kevent_release(ds);
+ _dispatch_kevent_unregister(ds);
return ds->do_targetq;
- } else if (ds->ds_cancel_handler) {
+ } else if (dr->ds_cancel_handler) {
if (dq != ds->do_targetq) {
return ds->do_targetq;
}
- }
+ }
_dispatch_source_cancel_callout(ds);
} else if (ds->ds_pending_data) {
// The source has pending data to deliver via the event handler callback
@@ -151,38 +574,42 @@
if (ds->ds_needs_rearm) {
return &_dispatch_mgr_q;
}
- } else if (ds->ds_needs_rearm && !ds->ds_is_armed) {
+ } else if (ds->ds_needs_rearm && !(ds->ds_atomic_flags & DSF_ARMED)) {
// The source needs to be rearmed on the manager queue.
if (dq != &_dispatch_mgr_q) {
return &_dispatch_mgr_q;
}
- _dispatch_source_kevent_resume(ds, 0, 0);
- ds->ds_is_armed = true;
+ _dispatch_source_kevent_resume(ds, 0);
+ (void)dispatch_atomic_or2o(ds, ds_atomic_flags, DSF_ARMED);
}
return NULL;
}
-bool
+static bool
_dispatch_source_probe(dispatch_source_t ds)
{
// This function determines whether the source needs to be invoked.
// The order of tests here in probe and in invoke should be consistent.
+ dispatch_source_refs_t dr = ds->ds_refs;
if (!ds->ds_is_installed) {
// The source needs to be installed on the manager queue.
return true;
+ } else if (dr->ds_registration_handler) {
+ // The registration handler needs to be delivered to the target queue.
+ return true;
} else if ((ds->ds_atomic_flags & DSF_CANCELED) || (ds->do_xref_cnt == 0)) {
// The source needs to be uninstalled from the manager queue, or the
// cancellation handler needs to be delivered to the target queue.
// Note: cancellation assumes installation.
- if (ds->ds_dkev || ds->ds_cancel_handler) {
+ if (ds->ds_dkev || dr->ds_cancel_handler) {
return true;
}
} else if (ds->ds_pending_data) {
// The source has pending data to deliver to the target queue.
return true;
- } else if (ds->ds_needs_rearm && !ds->ds_is_armed) {
+ } else if (ds->ds_needs_rearm && !(ds->ds_atomic_flags & DSF_ARMED)) {
// The source needs to be rearmed on the manager queue.
return true;
}
@@ -190,459 +617,601 @@
return false;
}
-void
-_dispatch_source_dispose(dispatch_source_t ds)
-{
- _dispatch_queue_dispose((dispatch_queue_t)ds);
-}
+#pragma mark -
+#pragma mark dispatch_source_kevent
-void
-_dispatch_source_latch_and_call(dispatch_source_t ds)
+static void
+_dispatch_source_merge_kevent(dispatch_source_t ds, const struct kevent *ke)
{
- unsigned long prev;
+ struct kevent fake;
if ((ds->ds_atomic_flags & DSF_CANCELED) || (ds->do_xref_cnt == 0)) {
return;
}
- prev = dispatch_atomic_xchg(&ds->ds_pending_data, 0);
- if (ds->ds_is_level) {
- ds->ds_data = ~prev;
- } else {
- ds->ds_data = prev;
- }
- if (dispatch_assume(prev)) {
- if (ds->ds_handler_func) {
-#ifndef DISPATCH_NO_LEGACY
- ((dispatch_source_handler_function_t)ds->ds_handler_func)(ds->ds_handler_ctxt, ds);
-#else
- ds->ds_handler_func(ds->ds_handler_ctxt);
+
+ // EVFILT_PROC may fail with ESRCH when the process exists but is a zombie
+ // <rdar://problem/5067725>. As a workaround, we simulate an exit event for
+ // any EVFILT_PROC with an invalid pid <rdar://problem/6626350>.
+ if (ke->flags & EV_ERROR) {
+ if (ke->filter == EVFILT_PROC && ke->data == ESRCH) {
+ fake = *ke;
+ fake.flags &= ~EV_ERROR;
+ fake.fflags = NOTE_EXIT;
+ fake.data = 0;
+ ke = &fake;
+#if DISPATCH_USE_VM_PRESSURE
+ } else if (ke->filter == EVFILT_VM && ke->data == ENOTSUP) {
+ // Memory pressure kevent is not supported on all platforms
+ // <rdar://problem/8636227>
+ return;
#endif
+ } else {
+ // log the unexpected error
+ (void)dispatch_assume_zero(ke->data);
+ return;
}
}
+
+ if (ds->ds_is_level) {
+ // ke->data is signed and "negative available data" makes no sense
+ // zero bytes happens when EV_EOF is set
+ // 10A268 does not fail this assert with EVFILT_READ and a 10 GB file
+ dispatch_assert(ke->data >= 0l);
+ ds->ds_pending_data = ~ke->data;
+ } else if (ds->ds_is_adder) {
+ (void)dispatch_atomic_add2o(ds, ds_pending_data, ke->data);
+ } else if (ke->fflags & ds->ds_pending_data_mask) {
+ (void)dispatch_atomic_or2o(ds, ds_pending_data,
+ ke->fflags & ds->ds_pending_data_mask);
+ }
+
+ // EV_DISPATCH and EV_ONESHOT sources are no longer armed after delivery
+ if (ds->ds_needs_rearm) {
+ (void)dispatch_atomic_and2o(ds, ds_atomic_flags, ~DSF_ARMED);
+ }
+
+ _dispatch_wakeup(ds);
}
void
-_dispatch_source_cancel_callout(dispatch_source_t ds)
+_dispatch_source_drain_kevent(struct kevent *ke)
{
- ds->ds_pending_data_mask = 0;
- ds->ds_pending_data = 0;
- ds->ds_data = 0;
+ dispatch_kevent_t dk = ke->udata;
+ dispatch_source_refs_t dri;
-#ifdef __BLOCKS__
- if (ds->ds_handler_is_block) {
- Block_release(ds->ds_handler_ctxt);
- ds->ds_handler_is_block = false;
- ds->ds_handler_func = NULL;
- ds->ds_handler_ctxt = NULL;
- }
+#if DISPATCH_DEBUG
+ static dispatch_once_t pred;
+ dispatch_once_f(&pred, NULL, _dispatch_kevent_debugger);
#endif
- if (!ds->ds_cancel_handler) {
- return;
+ dispatch_debug_kevents(ke, 1, __func__);
+
+#if HAVE_MACH
+ if (ke->filter == EVFILT_MACHPORT) {
+ return _dispatch_drain_mach_messages(ke);
}
- if (ds->ds_cancel_is_block) {
-#ifdef __BLOCKS__
- dispatch_block_t b = ds->ds_cancel_handler;
- if (ds->ds_atomic_flags & DSF_CANCELED) {
- b();
- }
- Block_release(ds->ds_cancel_handler);
- ds->ds_cancel_is_block = false;
#endif
- } else {
- dispatch_function_t f = ds->ds_cancel_handler;
- if (ds->ds_atomic_flags & DSF_CANCELED) {
- f(ds->do_ctxt);
- }
+ dispatch_assert(dk);
+
+ if (ke->flags & EV_ONESHOT) {
+ dk->dk_kevent.flags |= EV_ONESHOT;
}
- ds->ds_cancel_handler = NULL;
+
+ TAILQ_FOREACH(dri, &dk->dk_sources, dr_list) {
+ _dispatch_source_merge_kevent(_dispatch_source_from_refs(dri), ke);
+ }
}
-size_t
-dispatch_source_debug_attr(dispatch_source_t ds, char* buf, size_t bufsiz)
-{
- dispatch_queue_t target = ds->do_targetq;
- return snprintf(buf, bufsiz,
- "target = %s[%p], pending_data = 0x%lx, pending_data_mask = 0x%lx, ",
- target ? target->dq_label : "", target,
- ds->ds_pending_data, ds->ds_pending_data_mask);
-}
+#pragma mark -
+#pragma mark dispatch_kevent_t
-size_t
-_dispatch_source_debug(dispatch_source_t ds, char* buf, size_t bufsiz)
-{
- size_t offset = 0;
- offset += snprintf(&buf[offset], bufsiz - offset, "%s[%p] = { ", dx_kind(ds), ds);
- offset += dispatch_object_debug_attr(ds, &buf[offset], bufsiz - offset);
- offset += dispatch_source_debug_attr(ds, &buf[offset], bufsiz - offset);
- return offset;
-}
-
-#ifndef DISPATCH_NO_LEGACY
-static void
-dispatch_source_attr_dispose(dispatch_source_attr_t attr)
-{
- // release the finalizer block if necessary
- dispatch_source_attr_set_finalizer(attr, NULL);
- _dispatch_dispose(attr);
-}
-
-static const struct dispatch_source_attr_vtable_s dispatch_source_attr_vtable = {
- .do_type = DISPATCH_SOURCE_ATTR_TYPE,
- .do_kind = "source-attr",
- .do_dispose = dispatch_source_attr_dispose,
+static struct dispatch_kevent_s _dispatch_kevent_data_or = {
+ .dk_kevent = {
+ .filter = DISPATCH_EVFILT_CUSTOM_OR,
+ .flags = EV_CLEAR,
+ .udata = &_dispatch_kevent_data_or,
+ },
+ .dk_sources = TAILQ_HEAD_INITIALIZER(_dispatch_kevent_data_or.dk_sources),
+};
+static struct dispatch_kevent_s _dispatch_kevent_data_add = {
+ .dk_kevent = {
+ .filter = DISPATCH_EVFILT_CUSTOM_ADD,
+ .udata = &_dispatch_kevent_data_add,
+ },
+ .dk_sources = TAILQ_HEAD_INITIALIZER(_dispatch_kevent_data_add.dk_sources),
};
-dispatch_source_attr_t
-dispatch_source_attr_create(void)
-{
- dispatch_source_attr_t rval = calloc(1, sizeof(struct dispatch_source_attr_s));
+#if TARGET_OS_EMBEDDED
+#define DSL_HASH_SIZE 64u // must be a power of two
+#else
+#define DSL_HASH_SIZE 256u // must be a power of two
+#endif
+#define DSL_HASH(x) ((x) & (DSL_HASH_SIZE - 1))
- if (rval) {
- rval->do_vtable = &dispatch_source_attr_vtable;
- rval->do_next = DISPATCH_OBJECT_LISTLESS;
- rval->do_targetq = dispatch_get_global_queue(0, 0);
- rval->do_ref_cnt = 1;
- rval->do_xref_cnt = 1;
+DISPATCH_CACHELINE_ALIGN
+static TAILQ_HEAD(, dispatch_kevent_s) _dispatch_sources[DSL_HASH_SIZE];
+
+static dispatch_once_t __dispatch_kevent_init_pred;
+
+static void
+_dispatch_kevent_init(void *context DISPATCH_UNUSED)
+{
+ unsigned int i;
+ for (i = 0; i < DSL_HASH_SIZE; i++) {
+ TAILQ_INIT(&_dispatch_sources[i]);
}
- return rval;
+ TAILQ_INSERT_TAIL(&_dispatch_sources[0],
+ &_dispatch_kevent_data_or, dk_list);
+ TAILQ_INSERT_TAIL(&_dispatch_sources[0],
+ &_dispatch_kevent_data_add, dk_list);
+
+ _dispatch_source_timer_init();
+}
+
+static inline uintptr_t
+_dispatch_kevent_hash(uintptr_t ident, short filter)
+{
+ uintptr_t value;
+#if HAVE_MACH
+ value = (filter == EVFILT_MACHPORT ? MACH_PORT_INDEX(ident) : ident);
+#else
+ value = ident;
+#endif
+ return DSL_HASH(value);
+}
+
+static dispatch_kevent_t
+_dispatch_kevent_find(uintptr_t ident, short filter)
+{
+ uintptr_t hash = _dispatch_kevent_hash(ident, filter);
+ dispatch_kevent_t dki;
+
+ TAILQ_FOREACH(dki, &_dispatch_sources[hash], dk_list) {
+ if (dki->dk_kevent.ident == ident && dki->dk_kevent.filter == filter) {
+ break;
+ }
+ }
+ return dki;
+}
+
+static void
+_dispatch_kevent_insert(dispatch_kevent_t dk)
+{
+ uintptr_t hash = _dispatch_kevent_hash(dk->dk_kevent.ident,
+ dk->dk_kevent.filter);
+
+ TAILQ_INSERT_TAIL(&_dispatch_sources[hash], dk, dk_list);
+}
+
+// Find existing kevents, and merge any new flags if necessary
+static void
+_dispatch_kevent_register(dispatch_source_t ds)
+{
+ dispatch_kevent_t dk;
+ typeof(dk->dk_kevent.fflags) new_flags;
+ bool do_resume = false;
+
+ if (ds->ds_is_installed) {
+ return;
+ }
+ ds->ds_is_installed = true;
+
+ dispatch_once_f(&__dispatch_kevent_init_pred,
+ NULL, _dispatch_kevent_init);
+
+ dk = _dispatch_kevent_find(ds->ds_dkev->dk_kevent.ident,
+ ds->ds_dkev->dk_kevent.filter);
+
+ if (dk) {
+ // If an existing dispatch kevent is found, check to see if new flags
+ // need to be added to the existing kevent
+ new_flags = ~dk->dk_kevent.fflags & ds->ds_dkev->dk_kevent.fflags;
+ dk->dk_kevent.fflags |= ds->ds_dkev->dk_kevent.fflags;
+ free(ds->ds_dkev);
+ ds->ds_dkev = dk;
+ do_resume = new_flags;
+ } else {
+ dk = ds->ds_dkev;
+ _dispatch_kevent_insert(dk);
+ new_flags = dk->dk_kevent.fflags;
+ do_resume = true;
+ }
+
+ TAILQ_INSERT_TAIL(&dk->dk_sources, ds->ds_refs, dr_list);
+
+ // Re-register the kevent with the kernel if new flags were added
+ // by the dispatch kevent
+ if (do_resume) {
+ dk->dk_kevent.flags |= EV_ADD;
+ }
+ if (do_resume || ds->ds_needs_rearm) {
+ _dispatch_source_kevent_resume(ds, new_flags);
+ }
+ (void)dispatch_atomic_or2o(ds, ds_atomic_flags, DSF_ARMED);
+}
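
/*
 * Illustrative sketch (plain integers, hypothetical variable names): when a
 * second source attaches to an already-registered dispatch kevent, only the
 * fflags bits not yet known to the kernel force a re-registration, which is
 * the new_flags / do_resume computation in the function above.
 */
#include <stdint.h>
#include <stdio.h>

int
main(void)
{
	uint32_t registered = 0x05;                    // bits already in dk_kevent.fflags
	uint32_t requested  = 0x0c;                    // bits wanted by the new source
	uint32_t new_flags  = ~registered & requested; // 0x08: only the novel bit
	registered |= requested;                       // the kevent now tracks the union
	int do_resume = (new_flags != 0);              // round-trip to the kernel only if needed
	printf("new_flags=0x%x do_resume=%d union=0x%x\n",
	    new_flags, do_resume, registered);
	return 0;
}
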
+
+static bool
+_dispatch_kevent_resume(dispatch_kevent_t dk, uint32_t new_flags,
+ uint32_t del_flags)
+{
+ long r;
+ switch (dk->dk_kevent.filter) {
+ case DISPATCH_EVFILT_TIMER:
+ case DISPATCH_EVFILT_CUSTOM_ADD:
+ case DISPATCH_EVFILT_CUSTOM_OR:
+		// these types are not registered with kevent
+ return 0;
+#if HAVE_MACH
+ case EVFILT_MACHPORT:
+ return _dispatch_kevent_machport_resume(dk, new_flags, del_flags);
+#endif
+ case EVFILT_PROC:
+ if (dk->dk_kevent.flags & EV_ONESHOT) {
+ return 0;
+ }
+ // fall through
+ default:
+ r = _dispatch_update_kq(&dk->dk_kevent);
+ if (dk->dk_kevent.flags & EV_DISPATCH) {
+ dk->dk_kevent.flags &= ~EV_ADD;
+ }
+ return r;
+ }
+}
+
+static void
+_dispatch_kevent_dispose(dispatch_kevent_t dk)
+{
+ uintptr_t hash;
+
+ switch (dk->dk_kevent.filter) {
+ case DISPATCH_EVFILT_TIMER:
+ case DISPATCH_EVFILT_CUSTOM_ADD:
+ case DISPATCH_EVFILT_CUSTOM_OR:
+ // these sources live on statically allocated lists
+ return;
+#if HAVE_MACH
+ case EVFILT_MACHPORT:
+ _dispatch_kevent_machport_resume(dk, 0, dk->dk_kevent.fflags);
+ break;
+#endif
+ case EVFILT_PROC:
+ if (dk->dk_kevent.flags & EV_ONESHOT) {
+ break; // implicitly deleted
+ }
+ // fall through
+ default:
+ if (~dk->dk_kevent.flags & EV_DELETE) {
+ dk->dk_kevent.flags |= EV_DELETE;
+ _dispatch_update_kq(&dk->dk_kevent);
+ }
+ break;
+ }
+
+ hash = _dispatch_kevent_hash(dk->dk_kevent.ident,
+ dk->dk_kevent.filter);
+ TAILQ_REMOVE(&_dispatch_sources[hash], dk, dk_list);
+ free(dk);
+}
+
+static void
+_dispatch_kevent_unregister(dispatch_source_t ds)
+{
+ dispatch_kevent_t dk = ds->ds_dkev;
+ dispatch_source_refs_t dri;
+ uint32_t del_flags, fflags = 0;
+
+ ds->ds_dkev = NULL;
+
+ TAILQ_REMOVE(&dk->dk_sources, ds->ds_refs, dr_list);
+
+ if (TAILQ_EMPTY(&dk->dk_sources)) {
+ _dispatch_kevent_dispose(dk);
+ } else {
+ TAILQ_FOREACH(dri, &dk->dk_sources, dr_list) {
+ dispatch_source_t dsi = _dispatch_source_from_refs(dri);
+ fflags |= (uint32_t)dsi->ds_pending_data_mask;
+ }
+ del_flags = (uint32_t)ds->ds_pending_data_mask & ~fflags;
+ if (del_flags) {
+ dk->dk_kevent.flags |= EV_ADD;
+ dk->dk_kevent.fflags = fflags;
+ _dispatch_kevent_resume(dk, 0, del_flags);
+ }
+ }
+
+ (void)dispatch_atomic_and2o(ds, ds_atomic_flags, ~DSF_ARMED);
+ ds->ds_needs_rearm = false; // re-arm is pointless and bad now
+ _dispatch_release(ds); // the retain is done at creation time
+}
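
/*
 * Illustrative sketch (plain arrays stand in for the TAILQ of sources): when
 * a source detaches from a shared kevent, the kernel should keep watching
 * only the union of the masks of the remaining sources; bits wanted solely
 * by the departing source become del_flags, as in the function above.
 */
#include <stdint.h>
#include <stdio.h>

int
main(void)
{
	uint32_t remaining_masks[] = { 0x01, 0x04 };   // masks of the sources left behind
	uint32_t departing_mask = 0x06;                // mask of the removed source
	uint32_t fflags = 0;
	for (unsigned i = 0; i < 2; i++) {
		fflags |= remaining_masks[i];              // union of what is still wanted
	}
	uint32_t del_flags = departing_mask & ~fflags; // 0x02: nobody needs this bit now
	printf("keep=0x%x delete=0x%x\n", fflags, del_flags);
	return 0;
}
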
+
+#pragma mark -
+#pragma mark dispatch_timer
+
+DISPATCH_CACHELINE_ALIGN
+static struct dispatch_kevent_s _dispatch_kevent_timer[] = {
+ [DISPATCH_TIMER_INDEX_WALL] = {
+ .dk_kevent = {
+ .ident = DISPATCH_TIMER_INDEX_WALL,
+ .filter = DISPATCH_EVFILT_TIMER,
+ .udata = &_dispatch_kevent_timer[DISPATCH_TIMER_INDEX_WALL],
+ },
+ .dk_sources = TAILQ_HEAD_INITIALIZER(
+ _dispatch_kevent_timer[DISPATCH_TIMER_INDEX_WALL].dk_sources),
+ },
+ [DISPATCH_TIMER_INDEX_MACH] = {
+ .dk_kevent = {
+ .ident = DISPATCH_TIMER_INDEX_MACH,
+ .filter = DISPATCH_EVFILT_TIMER,
+ .udata = &_dispatch_kevent_timer[DISPATCH_TIMER_INDEX_MACH],
+ },
+ .dk_sources = TAILQ_HEAD_INITIALIZER(
+ _dispatch_kevent_timer[DISPATCH_TIMER_INDEX_MACH].dk_sources),
+ },
+ [DISPATCH_TIMER_INDEX_DISARM] = {
+ .dk_kevent = {
+ .ident = DISPATCH_TIMER_INDEX_DISARM,
+ .filter = DISPATCH_EVFILT_TIMER,
+ .udata = &_dispatch_kevent_timer[DISPATCH_TIMER_INDEX_DISARM],
+ },
+ .dk_sources = TAILQ_HEAD_INITIALIZER(
+ _dispatch_kevent_timer[DISPATCH_TIMER_INDEX_DISARM].dk_sources),
+ },
+};
+// Don't count disarmed timer list
+#define DISPATCH_TIMER_COUNT ((sizeof(_dispatch_kevent_timer) \
+ / sizeof(_dispatch_kevent_timer[0])) - 1)
+
+static inline void
+_dispatch_source_timer_init(void)
+{
+ TAILQ_INSERT_TAIL(&_dispatch_sources[DSL_HASH(DISPATCH_TIMER_INDEX_WALL)],
+ &_dispatch_kevent_timer[DISPATCH_TIMER_INDEX_WALL], dk_list);
+ TAILQ_INSERT_TAIL(&_dispatch_sources[DSL_HASH(DISPATCH_TIMER_INDEX_MACH)],
+ &_dispatch_kevent_timer[DISPATCH_TIMER_INDEX_MACH], dk_list);
+ TAILQ_INSERT_TAIL(&_dispatch_sources[DSL_HASH(DISPATCH_TIMER_INDEX_DISARM)],
+ &_dispatch_kevent_timer[DISPATCH_TIMER_INDEX_DISARM], dk_list);
+}
+
+DISPATCH_ALWAYS_INLINE
+static inline unsigned int
+_dispatch_source_timer_idx(dispatch_source_refs_t dr)
+{
+ return ds_timer(dr).flags & DISPATCH_TIMER_WALL_CLOCK ?
+ DISPATCH_TIMER_INDEX_WALL : DISPATCH_TIMER_INDEX_MACH;
+}
+
+DISPATCH_ALWAYS_INLINE
+static inline uint64_t
+_dispatch_source_timer_now2(unsigned int timer)
+{
+ switch (timer) {
+ case DISPATCH_TIMER_INDEX_MACH:
+ return _dispatch_absolute_time();
+ case DISPATCH_TIMER_INDEX_WALL:
+ return _dispatch_get_nanoseconds();
+ default:
+ DISPATCH_CRASH("Invalid timer");
+ }
+}
+
+DISPATCH_ALWAYS_INLINE
+static inline uint64_t
+_dispatch_source_timer_now(dispatch_source_refs_t dr)
+{
+ return _dispatch_source_timer_now2(_dispatch_source_timer_idx(dr));
+}
+
+// Updates the ordered list of timers based on next fire date for changes to ds.
+// Should only be called from the context of _dispatch_mgr_q.
+static void
+_dispatch_timer_list_update(dispatch_source_t ds)
+{
+ dispatch_source_refs_t dr = ds->ds_refs, dri = NULL;
+
+ dispatch_assert(_dispatch_queue_get_current() == &_dispatch_mgr_q);
+
+ // do not reschedule timers unregistered with _dispatch_kevent_unregister()
+ if (!ds->ds_dkev) {
+ return;
+ }
+
+ // Ensure the source is on the global kevent lists before it is removed and
+ // readded below.
+ _dispatch_kevent_register(ds);
+
+ TAILQ_REMOVE(&ds->ds_dkev->dk_sources, dr, dr_list);
+
+	// Move timers that are disabled, suspended or have missed intervals to the
+	// disarmed list; the rearm after resume (or after source invoke,
+	// respectively) will re-enable them
+ if (!ds_timer(dr).target || DISPATCH_OBJECT_SUSPENDED(ds) ||
+ ds->ds_pending_data) {
+ (void)dispatch_atomic_and2o(ds, ds_atomic_flags, ~DSF_ARMED);
+ ds->ds_dkev = &_dispatch_kevent_timer[DISPATCH_TIMER_INDEX_DISARM];
+ TAILQ_INSERT_TAIL(&ds->ds_dkev->dk_sources, (dispatch_source_refs_t)dr,
+ dr_list);
+ return;
+ }
+
+ // change the list if the clock type has changed
+ ds->ds_dkev = &_dispatch_kevent_timer[_dispatch_source_timer_idx(dr)];
+
+ TAILQ_FOREACH(dri, &ds->ds_dkev->dk_sources, dr_list) {
+ if (ds_timer(dri).target == 0 ||
+ ds_timer(dr).target < ds_timer(dri).target) {
+ break;
+ }
+ }
+
+ if (dri) {
+ TAILQ_INSERT_BEFORE(dri, dr, dr_list);
+ } else {
+ TAILQ_INSERT_TAIL(&ds->ds_dkev->dk_sources, dr, dr_list);
+ }
+}
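
/*
 * Illustrative sketch (simplified stand-in types): the per-clock timer lists
 * are kept sorted by next fire time, so insertion walks until the first
 * later -- or disabled (target == 0) -- entry, as the loop above does.
 */
#include <stdint.h>
#include <sys/queue.h>

struct fake_timer {
	TAILQ_ENTRY(fake_timer) link;
	uint64_t target;             // 0 means "disabled", sorts to the tail
};
TAILQ_HEAD(fake_timer_list, fake_timer);

static void
fake_timer_insert_sorted(struct fake_timer_list *list, struct fake_timer *t)
{
	struct fake_timer *cur;
	TAILQ_FOREACH(cur, list, link) {
		if (cur->target == 0 || t->target < cur->target) {
			break;               // insert before the first later/disabled timer
		}
	}
	if (cur) {
		TAILQ_INSERT_BEFORE(cur, t, link);
	} else {
		TAILQ_INSERT_TAIL(list, t, link);
	}
}

int
main(void)
{
	struct fake_timer_list list = TAILQ_HEAD_INITIALIZER(list);
	struct fake_timer a = { .target = 100 }, b = { .target = 300 };
	struct fake_timer c = { .target = 200 };
	TAILQ_INSERT_TAIL(&list, &a, link);
	TAILQ_INSERT_TAIL(&list, &b, link);
	fake_timer_insert_sorted(&list, &c); // lands between a and b
	return 0;
}
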
+
+static inline void
+_dispatch_run_timers2(unsigned int timer)
+{
+ dispatch_source_refs_t dr;
+ dispatch_source_t ds;
+ uint64_t now, missed;
+
+ now = _dispatch_source_timer_now2(timer);
+ while ((dr = TAILQ_FIRST(&_dispatch_kevent_timer[timer].dk_sources))) {
+ ds = _dispatch_source_from_refs(dr);
+ // We may find timers on the wrong list due to a pending update from
+ // dispatch_source_set_timer. Force an update of the list in that case.
+ if (timer != ds->ds_ident_hack) {
+ _dispatch_timer_list_update(ds);
+ continue;
+ }
+ if (!ds_timer(dr).target) {
+ // no configured timers on the list
+ break;
+ }
+ if (ds_timer(dr).target > now) {
+ // Done running timers for now.
+ break;
+ }
+		// Remove timers that are suspended or have missed intervals from the
+		// list; the rearm after resume (or after source invoke, respectively)
+		// will re-enable them
+ if (DISPATCH_OBJECT_SUSPENDED(ds) || ds->ds_pending_data) {
+ _dispatch_timer_list_update(ds);
+ continue;
+ }
+ // Calculate number of missed intervals.
+ missed = (now - ds_timer(dr).target) / ds_timer(dr).interval;
+ if (++missed > INT_MAX) {
+ missed = INT_MAX;
+ }
+ ds_timer(dr).target += missed * ds_timer(dr).interval;
+ _dispatch_timer_list_update(ds);
+ ds_timer(dr).last_fire = now;
+ (void)dispatch_atomic_add2o(ds, ds_pending_data, (int)missed);
+ _dispatch_wakeup(ds);
+ }
}
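
/*
 * Illustrative sketch (plain integers, no libdispatch types): how the loop
 * above advances a late periodic timer past "now" while counting every fire
 * that was skipped, so the handler sees them as coalesced pending data.
 */
#include <stdint.h>
#include <stdio.h>

int
main(void)
{
	uint64_t target = 1000, interval = 250, now = 1620;

	// the fires scheduled at 1000, 1250 and 1500 are all overdue;
	// (now - target) / interval counts the two after the original target
	uint64_t missed = (now - target) / interval;
	// incrementing accounts for the fire at the original target itself
	missed += 1;
	// next fire lands strictly after now: 1000 + 3 * 250 = 1750
	target += missed * interval;

	printf("missed=%llu next_target=%llu\n",
	    (unsigned long long)missed, (unsigned long long)target);
	return 0;
}
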
void
-dispatch_source_attr_set_finalizer_f(dispatch_source_attr_t attr,
- void *context, dispatch_source_finalizer_function_t finalizer)
+_dispatch_run_timers(void)
{
-#ifdef __BLOCKS__
- if (attr->finalizer_func == (void*)_dispatch_call_block_and_release2) {
- Block_release(attr->finalizer_ctxt);
- }
-#endif
+ dispatch_once_f(&__dispatch_kevent_init_pred,
+ NULL, _dispatch_kevent_init);
- attr->finalizer_ctxt = context;
- attr->finalizer_func = finalizer;
-}
-
-#ifdef __BLOCKS__
-long
-dispatch_source_attr_set_finalizer(dispatch_source_attr_t attr,
- dispatch_source_finalizer_t finalizer)
-{
- void *ctxt;
- dispatch_source_finalizer_function_t func;
-
- if (finalizer) {
- if (!(ctxt = Block_copy(finalizer))) {
- return 1;
+ unsigned int i;
+ for (i = 0; i < DISPATCH_TIMER_COUNT; i++) {
+ if (!TAILQ_EMPTY(&_dispatch_kevent_timer[i].dk_sources)) {
+ _dispatch_run_timers2(i);
}
- func = (void *)_dispatch_call_block_and_release2;
- } else {
- ctxt = NULL;
- func = NULL;
}
-
- dispatch_source_attr_set_finalizer_f(attr, ctxt, func);
-
- return 0;
}
-dispatch_source_finalizer_t
-dispatch_source_attr_get_finalizer(dispatch_source_attr_t attr)
+static inline unsigned long
+_dispatch_source_timer_data(dispatch_source_refs_t dr, unsigned long prev)
{
- if (attr->finalizer_func == (void*)_dispatch_call_block_and_release2) {
- return (dispatch_source_finalizer_t)attr->finalizer_ctxt;
- } else if (attr->finalizer_func == NULL) {
+ // calculate the number of intervals since last fire
+ unsigned long data, missed;
+ uint64_t now = _dispatch_source_timer_now(dr);
+ missed = (unsigned long)((now - ds_timer(dr).last_fire) /
+ ds_timer(dr).interval);
+ // correct for missed intervals already delivered last time
+ data = prev - ds_timer(dr).missed + missed;
+ ds_timer(dr).missed = missed;
+ return data;
+}
+
+// approx 1 year (60s * 60m * 24h * 365d)
+#define FOREVER_NSEC 31536000000000000ull
+
+struct timespec *
+_dispatch_get_next_timer_fire(struct timespec *howsoon)
+{
+ // <rdar://problem/6459649>
+ // kevent(2) does not allow large timeouts, so we use a long timeout
+ // instead (approximately 1 year).
+ dispatch_source_refs_t dr = NULL;
+ unsigned int timer;
+ uint64_t now, delta_tmp, delta = UINT64_MAX;
+
+ for (timer = 0; timer < DISPATCH_TIMER_COUNT; timer++) {
+ // Timers are kept in order, first one will fire next
+ dr = TAILQ_FIRST(&_dispatch_kevent_timer[timer].dk_sources);
+ if (!dr || !ds_timer(dr).target) {
+ // Empty list or disabled timer
+ continue;
+ }
+ now = _dispatch_source_timer_now(dr);
+ if (ds_timer(dr).target <= now) {
+ howsoon->tv_sec = 0;
+ howsoon->tv_nsec = 0;
+ return howsoon;
+ }
+ // the subtraction cannot go negative because the previous "if"
+ // verified that the target is greater than now.
+ delta_tmp = ds_timer(dr).target - now;
+ if (!(ds_timer(dr).flags & DISPATCH_TIMER_WALL_CLOCK)) {
+ delta_tmp = _dispatch_time_mach2nano(delta_tmp);
+ }
+ if (delta_tmp < delta) {
+ delta = delta_tmp;
+ }
+ }
+ if (slowpath(delta > FOREVER_NSEC)) {
return NULL;
} else {
- abort(); // finalizer is not a block...
+ howsoon->tv_sec = (time_t)(delta / NSEC_PER_SEC);
+ howsoon->tv_nsec = (long)(delta % NSEC_PER_SEC);
}
-}
-#endif
-
-void
-dispatch_source_attr_set_context(dispatch_source_attr_t attr, void *context)
-{
- attr->context = context;
+ return howsoon;
}
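
/*
 * Illustrative sketch (hypothetical names): splitting a nanosecond delta into
 * the struct timespec handed to the kevent() timeout; deltas beyond the
 * roughly one-year cap are reported as NULL, as in the function above.
 */
#include <stdint.h>
#include <stdio.h>
#include <time.h>

#define EXAMPLE_NSEC_PER_SEC 1000000000ull
#define EXAMPLE_FOREVER_NSEC 31536000000000000ull // 60s * 60m * 24h * 365d

static struct timespec *
example_delta_to_timespec(uint64_t delta_ns, struct timespec *ts)
{
	if (delta_ns > EXAMPLE_FOREVER_NSEC) {
		return NULL;                               // caller treats this as "no timeout"
	}
	ts->tv_sec = (time_t)(delta_ns / EXAMPLE_NSEC_PER_SEC);
	ts->tv_nsec = (long)(delta_ns % EXAMPLE_NSEC_PER_SEC);
	return ts;
}

int
main(void)
{
	struct timespec ts;
	if (example_delta_to_timespec(1500000000ull, &ts)) { // 1.5 s
		printf("%ld s + %ld ns\n", (long)ts.tv_sec, ts.tv_nsec);
	}
	return 0;
}
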
-dispatch_source_attr_t
-dispatch_source_attr_copy(dispatch_source_attr_t proto)
-{
- dispatch_source_attr_t rval = NULL;
-
- if (proto && (rval = malloc(sizeof(struct dispatch_source_attr_s)))) {
- memcpy(rval, proto, sizeof(struct dispatch_source_attr_s));
-#ifdef __BLOCKS__
- if (rval->finalizer_func == (void*)_dispatch_call_block_and_release2) {
- rval->finalizer_ctxt = Block_copy(rval->finalizer_ctxt);
- }
-#endif
- } else if (!proto) {
- rval = dispatch_source_attr_create();
- }
- return rval;
-}
-#endif /* DISPATCH_NO_LEGACY */
-
-
-dispatch_source_t
-dispatch_source_create(dispatch_source_type_t type,
- uintptr_t handle,
- unsigned long mask,
- dispatch_queue_t q)
-{
- dispatch_source_t ds = NULL;
- static char source_label[sizeof(ds->dq_label)] = "source";
-
- // input validation
- if (type == NULL || (mask & ~type->mask)) {
- goto out_bad;
- }
-
- ds = calloc(1ul, sizeof(struct dispatch_source_s));
- if (slowpath(!ds)) {
- goto out_bad;
- }
-
- // Initialize as a queue first, then override some settings below.
- _dispatch_queue_init((dispatch_queue_t)ds);
- memcpy(ds->dq_label, source_label, sizeof(source_label));
-
- // Dispatch Object
- ds->do_vtable = &_dispatch_source_kevent_vtable;
- ds->do_ref_cnt++; // the reference the manger queue holds
- ds->do_suspend_cnt = DISPATCH_OBJECT_SUSPEND_INTERVAL;
- // do_targetq will be retained below, past point of no-return
- ds->do_targetq = q;
-
- if (slowpath(!type->init(ds, type, handle, mask, q))) {
- goto out_bad;
- }
-
- dispatch_assert(!(ds->ds_is_level && ds->ds_is_adder));
-#if DISPATCH_DEBUG
- dispatch_debug(ds, __FUNCTION__);
-#endif
-
- _dispatch_retain(ds->do_targetq);
- return ds;
-
-out_bad:
- free(ds);
- return NULL;
-}
-
-#ifdef __BLOCKS__
-// 6618342 Contact the team that owns the Instrument DTrace probe before renaming this symbol
-static void
-_dispatch_source_set_event_handler2(void *context)
-{
- struct Block_layout *bl = context;
-
- dispatch_source_t ds = (dispatch_source_t)_dispatch_queue_get_current();
- dispatch_assert(ds->do_vtable == &_dispatch_source_kevent_vtable);
-
- if (ds->ds_handler_is_block && ds->ds_handler_ctxt) {
- Block_release(ds->ds_handler_ctxt);
- }
- ds->ds_handler_func = bl ? (void *)bl->invoke : NULL;
- ds->ds_handler_ctxt = bl;
- ds->ds_handler_is_block = true;
-}
-
-void
-dispatch_source_set_event_handler(dispatch_source_t ds, dispatch_block_t handler)
-{
- dispatch_assert(!ds->ds_is_legacy);
- handler = _dispatch_Block_copy(handler);
- dispatch_barrier_async_f((dispatch_queue_t)ds,
- handler, _dispatch_source_set_event_handler2);
-}
-#endif /* __BLOCKS__ */
+struct dispatch_set_timer_params {
+ dispatch_source_t ds;
+ uintptr_t ident;
+ struct dispatch_timer_source_s values;
+};
static void
-_dispatch_source_set_event_handler_f(void *context)
+_dispatch_source_set_timer3(void *context)
{
- dispatch_source_t ds = (dispatch_source_t)_dispatch_queue_get_current();
- dispatch_assert(ds->do_vtable == &_dispatch_source_kevent_vtable);
-
-#ifdef __BLOCKS__
- if (ds->ds_handler_is_block && ds->ds_handler_ctxt) {
- Block_release(ds->ds_handler_ctxt);
- }
-#endif
- ds->ds_handler_func = context;
- ds->ds_handler_ctxt = ds->do_ctxt;
- ds->ds_handler_is_block = false;
-}
-
-void
-dispatch_source_set_event_handler_f(dispatch_source_t ds,
- dispatch_function_t handler)
-{
- dispatch_assert(!ds->ds_is_legacy);
- dispatch_barrier_async_f((dispatch_queue_t)ds,
- handler, _dispatch_source_set_event_handler_f);
-}
-
-#ifdef __BLOCKS__
-// 6618342 Contact the team that owns the Instrument DTrace probe before renaming this symbol
-static void
-_dispatch_source_set_cancel_handler2(void *context)
-{
- dispatch_source_t ds = (dispatch_source_t)_dispatch_queue_get_current();
- dispatch_assert(ds->do_vtable == &_dispatch_source_kevent_vtable);
-
- if (ds->ds_cancel_is_block && ds->ds_cancel_handler) {
- Block_release(ds->ds_cancel_handler);
- }
- ds->ds_cancel_handler = context;
- ds->ds_cancel_is_block = true;
-}
-
-void
-dispatch_source_set_cancel_handler(dispatch_source_t ds,
- dispatch_block_t handler)
-{
- dispatch_assert(!ds->ds_is_legacy);
- handler = _dispatch_Block_copy(handler);
- dispatch_barrier_async_f((dispatch_queue_t)ds,
- handler, _dispatch_source_set_cancel_handler2);
-}
-#endif /* __BLOCKS__ */
-
-static void
-_dispatch_source_set_cancel_handler_f(void *context)
-{
- dispatch_source_t ds = (dispatch_source_t)_dispatch_queue_get_current();
- dispatch_assert(ds->do_vtable == &_dispatch_source_kevent_vtable);
-
-#ifdef __BLOCKS__
- if (ds->ds_cancel_is_block && ds->ds_cancel_handler) {
- Block_release(ds->ds_cancel_handler);
- }
-#endif
- ds->ds_cancel_handler = context;
- ds->ds_cancel_is_block = false;
-}
-
-void
-dispatch_source_set_cancel_handler_f(dispatch_source_t ds,
- dispatch_function_t handler)
-{
- dispatch_assert(!ds->ds_is_legacy);
- dispatch_barrier_async_f((dispatch_queue_t)ds,
- handler, _dispatch_source_set_cancel_handler_f);
-}
-
-#ifndef DISPATCH_NO_LEGACY
-// 6618342 Contact the team that owns the Instrument DTrace probe before renaming this symbol
-dispatch_source_t
-_dispatch_source_create2(dispatch_source_t ds,
- dispatch_source_attr_t attr,
- void *context,
- dispatch_source_handler_function_t handler)
-{
- if (ds == NULL || handler == NULL) {
- return NULL;
- }
-
- ds->ds_is_legacy = true;
-
- ds->ds_handler_func = (dispatch_function_t)handler;
- ds->ds_handler_ctxt = context;
-
- if (attr && attr != DISPATCH_SOURCE_CREATE_SUSPENDED) {
- ds->dq_finalizer_ctxt = attr->finalizer_ctxt;
- ds->dq_finalizer_func = (typeof(ds->dq_finalizer_func))attr->finalizer_func;
- ds->do_ctxt = attr->context;
- }
-#ifdef __BLOCKS__
- if (ds->dq_finalizer_func == (void*)_dispatch_call_block_and_release2) {
- ds->dq_finalizer_ctxt = Block_copy(ds->dq_finalizer_ctxt);
- if (!ds->dq_finalizer_ctxt) {
- goto out_bad;
- }
- }
- if (handler == _dispatch_source_call_block) {
- struct Block_layout *bl = ds->ds_handler_ctxt = Block_copy(context);
- if (!ds->ds_handler_ctxt) {
- if (ds->dq_finalizer_func == (void*)_dispatch_call_block_and_release2) {
- Block_release(ds->dq_finalizer_ctxt);
- }
- goto out_bad;
- }
- ds->ds_handler_func = (void *)bl->invoke;
- ds->ds_handler_is_block = true;
- }
-
- // all legacy sources get a cancellation event on the normal event handler.
- dispatch_function_t func = ds->ds_handler_func;
- dispatch_source_handler_t block = ds->ds_handler_ctxt;
- void *ctxt = ds->ds_handler_ctxt;
- bool handler_is_block = ds->ds_handler_is_block;
-
- ds->ds_cancel_is_block = true;
- if (handler_is_block) {
- ds->ds_cancel_handler = _dispatch_Block_copy(^{
- block(ds);
- });
- } else {
- ds->ds_cancel_handler = _dispatch_Block_copy(^{
- ((dispatch_source_handler_function_t)func)(ctxt, ds);
- });
- }
-#endif
- if (attr != DISPATCH_SOURCE_CREATE_SUSPENDED) {
- dispatch_resume(ds);
- }
-
- return ds;
-
-#ifdef __BLOCKS__
-out_bad:
- free(ds);
- return NULL;
-#endif
-}
-
-long
-dispatch_source_get_error(dispatch_source_t ds, long *err_out)
-{
- // 6863892 don't report ECANCELED until kevent is unregistered
- if ((ds->ds_atomic_flags & DSF_CANCELED) && !ds->ds_dkev) {
- if (err_out) {
- *err_out = ECANCELED;
- }
- return DISPATCH_ERROR_DOMAIN_POSIX;
- } else {
- return DISPATCH_ERROR_DOMAIN_NO_ERROR;
- }
-}
-#endif /* DISPATCH_NO_LEGACY */
-
-// To be called from the context of the _dispatch_mgr_q
-static void
-_dispatch_source_set_timer2(void *context)
-{
+ // Called on the _dispatch_mgr_q
struct dispatch_set_timer_params *params = context;
dispatch_source_t ds = params->ds;
ds->ds_ident_hack = params->ident;
- ds->ds_timer = params->values;
+ ds_timer(ds->ds_refs) = params->values;
+ // Clear any pending data that might have accumulated on
+ // older timer params <rdar://problem/8574886>
+ ds->ds_pending_data = 0;
_dispatch_timer_list_update(ds);
dispatch_resume(ds);
dispatch_release(ds);
free(params);
}
+static void
+_dispatch_source_set_timer2(void *context)
+{
+ // Called on the source queue
+ struct dispatch_set_timer_params *params = context;
+ dispatch_suspend(params->ds);
+ dispatch_barrier_async_f(&_dispatch_mgr_q, params,
+ _dispatch_source_set_timer3);
+}
+
void
dispatch_source_set_timer(dispatch_source_t ds,
dispatch_time_t start,
uint64_t interval,
uint64_t leeway)
{
+ if (slowpath(!ds->ds_is_timer)) {
+ DISPATCH_CLIENT_CRASH("Attempt to set timer on a non-timer source");
+ }
+
struct dispatch_set_timer_params *params;
-
+
// we use zero internally to mean disabled
if (interval == 0) {
interval = 1;
@@ -650,32 +1219,26 @@
// 6866347 - make sure nanoseconds won't overflow
interval = INT64_MAX;
}
+ if ((int64_t)leeway < 0) {
+ leeway = INT64_MAX;
+ }
- // Suspend the source so that it doesn't fire with pending changes
- // The use of suspend/resume requires the external retain/release
- dispatch_retain(ds);
- dispatch_suspend(ds);
-
if (start == DISPATCH_TIME_NOW) {
start = _dispatch_absolute_time();
} else if (start == DISPATCH_TIME_FOREVER) {
start = INT64_MAX;
}
- if ((int64_t)leeway < 0) {
- leeway = INT64_MAX;
- }
- while (!(params = malloc(sizeof(struct dispatch_set_timer_params)))) {
+ while (!(params = calloc(1ul, sizeof(struct dispatch_set_timer_params)))) {
sleep(1);
}
params->ds = ds;
- params->values.flags = ds->ds_timer.flags;
+ params->values.flags = ds_timer(ds->ds_refs).flags;
if ((int64_t)start < 0) {
// wall clock
params->ident = DISPATCH_TIMER_INDEX_WALL;
- params->values.start = -((int64_t)start);
params->values.target = -((int64_t)start);
params->values.interval = interval;
params->values.leeway = leeway;
@@ -683,50 +1246,868 @@
} else {
// absolute clock
params->ident = DISPATCH_TIMER_INDEX_MACH;
- params->values.start = start;
params->values.target = start;
params->values.interval = _dispatch_time_nano2mach(interval);
+
+	// rdar://problem/7287561 interval must be at least one
+ // in order to avoid later division by zero when calculating
+ // the missed interval count. (NOTE: the wall clock's
+ // interval is already "fixed" to be 1 or more)
+ if (params->values.interval < 1) {
+ params->values.interval = 1;
+ }
+
params->values.leeway = _dispatch_time_nano2mach(leeway);
params->values.flags &= ~DISPATCH_TIMER_WALL_CLOCK;
}
-
- dispatch_barrier_async_f(&_dispatch_mgr_q, params, _dispatch_source_set_timer2);
+ // Suspend the source so that it doesn't fire with pending changes
+ // The use of suspend/resume requires the external retain/release
+ dispatch_retain(ds);
+ dispatch_barrier_async_f((dispatch_queue_t)ds, params,
+ _dispatch_source_set_timer2);
}
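
/*
 * Usage sketch (standalone program, public API from <dispatch/dispatch.h>
 * only): a repeating timer source driven by the machinery above;
 * dispatch_source_get_data() reports the coalesced count of intervals that
 * fired since the handler last ran.
 */
#include <dispatch/dispatch.h>
#include <stdio.h>

static void
tick(void *ctxt)
{
	dispatch_source_t timer = ctxt;
	// number of intervals fired since the last callout (missed + 1)
	printf("tick x%lu\n", dispatch_source_get_data(timer));
}

int
main(void)
{
	dispatch_queue_t q = dispatch_get_global_queue(
	    DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);
	dispatch_source_t timer = dispatch_source_create(
	    DISPATCH_SOURCE_TYPE_TIMER, 0, 0, q);

	dispatch_set_context(timer, timer);
	dispatch_source_set_event_handler_f(timer, tick);
	// start 1s from now, repeat every 1s, allow 100ms of leeway
	dispatch_source_set_timer(timer, dispatch_time(DISPATCH_TIME_NOW,
	    1 * NSEC_PER_SEC), 1 * NSEC_PER_SEC, 100 * NSEC_PER_MSEC);
	dispatch_resume(timer);

	dispatch_main(); // never returns; the timer keeps firing
	return 0;
}
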
-#ifndef DISPATCH_NO_LEGACY
-// LEGACY
-long
-dispatch_source_timer_set_time(dispatch_source_t ds, uint64_t nanoseconds, uint64_t leeway)
+#pragma mark -
+#pragma mark dispatch_mach
+
+#if HAVE_MACH
+
+#if DISPATCH_DEBUG && DISPATCH_MACHPORT_DEBUG
+#define _dispatch_debug_machport(name) \
+ dispatch_debug_machport((name), __func__)
+#else
+#define _dispatch_debug_machport(name)
+#endif
+
+// Flags for all notifications that are registered/unregistered when a
+// send-possible notification is requested/delivered
+#define _DISPATCH_MACH_SP_FLAGS (DISPATCH_MACH_SEND_POSSIBLE| \
+ DISPATCH_MACH_SEND_DEAD|DISPATCH_MACH_SEND_DELETED)
+
+#define _DISPATCH_IS_POWER_OF_TWO(v) (!(v & (v - 1)) && v)
+#define _DISPATCH_HASH(x, y) (_DISPATCH_IS_POWER_OF_TWO(y) ? \
+ (MACH_PORT_INDEX(x) & ((y) - 1)) : (MACH_PORT_INDEX(x) % (y)))
+
+#define _DISPATCH_MACHPORT_HASH_SIZE 32
+#define _DISPATCH_MACHPORT_HASH(x) \
+ _DISPATCH_HASH((x), _DISPATCH_MACHPORT_HASH_SIZE)
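
/*
 * Illustrative sketch (local stand-ins, not the macros above): for a
 * power-of-two bucket count, masking with (count - 1) is equivalent to the
 * modulo that _DISPATCH_HASH otherwise falls back to.
 */
#include <assert.h>
#include <stdint.h>

static unsigned int
example_hash(uint32_t port_index, unsigned int buckets)
{
	int pow2 = buckets && !(buckets & (buckets - 1));
	return pow2 ? (port_index & (buckets - 1)) : (port_index % buckets);
}

int
main(void)
{
	assert(example_hash(37, 32) == 37 % 32); // power of two: mask == modulo
	assert(example_hash(37, 24) == 37 % 24); // otherwise: plain modulo
	return 0;
}
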
+
+static dispatch_source_t _dispatch_mach_notify_source;
+static mach_port_t _dispatch_port_set;
+static mach_port_t _dispatch_event_port;
+
+static kern_return_t _dispatch_mach_notify_update(dispatch_kevent_t dk,
+ uint32_t new_flags, uint32_t del_flags, uint32_t mask,
+ mach_msg_id_t notify_msgid, mach_port_mscount_t notify_sync);
+
+static void
+_dispatch_port_set_init(void *context DISPATCH_UNUSED)
{
- dispatch_time_t start;
- if (nanoseconds == 0) {
- nanoseconds = 1;
+ struct kevent kev = {
+ .filter = EVFILT_MACHPORT,
+ .flags = EV_ADD,
+ };
+ kern_return_t kr;
+
+ kr = mach_port_allocate(mach_task_self(), MACH_PORT_RIGHT_PORT_SET,
+ &_dispatch_port_set);
+ DISPATCH_VERIFY_MIG(kr);
+ if (kr) {
+ _dispatch_bug_mach_client(
+ "_dispatch_port_set_init: mach_port_allocate() failed", kr);
+ DISPATCH_CLIENT_CRASH(
+ "mach_port_allocate() failed: cannot create port set");
}
- if (ds->ds_timer.flags == (DISPATCH_TIMER_ABSOLUTE|DISPATCH_TIMER_WALL_CLOCK)) {
- static const struct timespec t0;
- start = dispatch_walltime(&t0, nanoseconds);
- } else if (ds->ds_timer.flags & DISPATCH_TIMER_WALL_CLOCK) {
- start = dispatch_walltime(DISPATCH_TIME_NOW, nanoseconds);
- } else {
- start = dispatch_time(DISPATCH_TIME_NOW, nanoseconds);
+ kr = mach_port_allocate(mach_task_self(), MACH_PORT_RIGHT_RECEIVE,
+ &_dispatch_event_port);
+ DISPATCH_VERIFY_MIG(kr);
+ if (kr) {
+ _dispatch_bug_mach_client(
+ "_dispatch_port_set_init: mach_port_allocate() failed", kr);
+ DISPATCH_CLIENT_CRASH(
+ "mach_port_allocate() failed: cannot create receive right");
}
- if (ds->ds_timer.flags & (DISPATCH_TIMER_ABSOLUTE|DISPATCH_TIMER_ONESHOT)) {
- // 6866347 - make sure nanoseconds won't overflow
- nanoseconds = INT64_MAX; // non-repeating (~292 years)
+ kr = mach_port_move_member(mach_task_self(), _dispatch_event_port,
+ _dispatch_port_set);
+ DISPATCH_VERIFY_MIG(kr);
+ if (kr) {
+ _dispatch_bug_mach_client(
+ "_dispatch_port_set_init: mach_port_move_member() failed", kr);
+ DISPATCH_CLIENT_CRASH("mach_port_move_member() failed");
}
- dispatch_source_set_timer(ds, start, nanoseconds, leeway);
- return 0;
+
+ kev.ident = _dispatch_port_set;
+
+ _dispatch_update_kq(&kev);
}
-// LEGACY
-uint64_t
-dispatch_event_get_nanoseconds(dispatch_source_t ds)
+static mach_port_t
+_dispatch_get_port_set(void)
{
- if (ds->ds_timer.flags & DISPATCH_TIMER_WALL_CLOCK) {
- return ds->ds_timer.interval;
- } else {
- return _dispatch_time_mach2nano(ds->ds_timer.interval);
+ static dispatch_once_t pred;
+
+ dispatch_once_f(&pred, NULL, _dispatch_port_set_init);
+
+ return _dispatch_port_set;
+}
+
+static kern_return_t
+_dispatch_kevent_machport_enable(dispatch_kevent_t dk)
+{
+ mach_port_t mp = (mach_port_t)dk->dk_kevent.ident;
+ kern_return_t kr;
+
+ _dispatch_debug_machport(mp);
+ kr = mach_port_move_member(mach_task_self(), mp, _dispatch_get_port_set());
+ if (slowpath(kr)) {
+ DISPATCH_VERIFY_MIG(kr);
+ switch (kr) {
+ case KERN_INVALID_NAME:
+#if DISPATCH_DEBUG
+ _dispatch_log("Corruption: Mach receive right 0x%x destroyed "
+ "prematurely", mp);
+#endif
+ break;
+ case KERN_INVALID_RIGHT:
+ _dispatch_bug_mach_client("_dispatch_kevent_machport_enable: "
+ "mach_port_move_member() failed ", kr);
+ break;
+ default:
+ (void)dispatch_assume_zero(kr);
+ break;
+ }
+ }
+ return kr;
+}
+
+static void
+_dispatch_kevent_machport_disable(dispatch_kevent_t dk)
+{
+ mach_port_t mp = (mach_port_t)dk->dk_kevent.ident;
+ kern_return_t kr;
+
+ _dispatch_debug_machport(mp);
+ kr = mach_port_move_member(mach_task_self(), mp, 0);
+ if (slowpath(kr)) {
+ DISPATCH_VERIFY_MIG(kr);
+ switch (kr) {
+ case KERN_INVALID_RIGHT:
+ case KERN_INVALID_NAME:
+#if DISPATCH_DEBUG
+ _dispatch_log("Corruption: Mach receive right 0x%x destroyed "
+ "prematurely", mp);
+#endif
+ break;
+ default:
+ (void)dispatch_assume_zero(kr);
+ break;
+ }
}
}
-#endif /* DISPATCH_NO_LEGACY */
+kern_return_t
+_dispatch_kevent_machport_resume(dispatch_kevent_t dk, uint32_t new_flags,
+ uint32_t del_flags)
+{
+ kern_return_t kr_recv = 0, kr_sp = 0;
+
+ dispatch_assert_zero(new_flags & del_flags);
+ if (new_flags & DISPATCH_MACH_RECV_MESSAGE) {
+ kr_recv = _dispatch_kevent_machport_enable(dk);
+ } else if (del_flags & DISPATCH_MACH_RECV_MESSAGE) {
+ _dispatch_kevent_machport_disable(dk);
+ }
+ if ((new_flags & _DISPATCH_MACH_SP_FLAGS) ||
+ (del_flags & _DISPATCH_MACH_SP_FLAGS)) {
+ // Requesting a (delayed) non-sync send-possible notification
+ // registers for both immediate dead-name notification and delayed-arm
+ // send-possible notification for the port.
+		// The send-possible notification is armed when a mach_msg() with
+		// the MACH_SEND_NOTIFY option to the port times out.
+ // If send-possible is unavailable, fall back to immediate dead-name
+ // registration rdar://problem/2527840&9008724
+ kr_sp = _dispatch_mach_notify_update(dk, new_flags, del_flags,
+ _DISPATCH_MACH_SP_FLAGS, MACH_NOTIFY_SEND_POSSIBLE,
+ MACH_NOTIFY_SEND_POSSIBLE == MACH_NOTIFY_DEAD_NAME ? 1 : 0);
+ }
+
+ return (kr_recv ? kr_recv : kr_sp);
+}
+
+void
+_dispatch_drain_mach_messages(struct kevent *ke)
+{
+ mach_port_t name = (mach_port_name_t)ke->data;
+ dispatch_source_refs_t dri;
+ dispatch_kevent_t dk;
+ struct kevent kev;
+
+ if (!dispatch_assume(name)) {
+ return;
+ }
+ _dispatch_debug_machport(name);
+ dk = _dispatch_kevent_find(name, EVFILT_MACHPORT);
+ if (!dispatch_assume(dk)) {
+ return;
+ }
+ _dispatch_kevent_machport_disable(dk); // emulate EV_DISPATCH
+
+ EV_SET(&kev, name, EVFILT_MACHPORT, EV_ADD|EV_ENABLE|EV_DISPATCH,
+ DISPATCH_MACH_RECV_MESSAGE, 0, dk);
+
+ TAILQ_FOREACH(dri, &dk->dk_sources, dr_list) {
+ _dispatch_source_merge_kevent(_dispatch_source_from_refs(dri), &kev);
+ }
+}
+
+static inline void
+_dispatch_mach_notify_merge(mach_port_t name, uint32_t flag, uint32_t unreg,
+ bool final)
+{
+ dispatch_source_refs_t dri;
+ dispatch_kevent_t dk;
+ struct kevent kev;
+
+ dk = _dispatch_kevent_find(name, EVFILT_MACHPORT);
+ if (!dk) {
+ return;
+ }
+
+ // Update notification registration state.
+ dk->dk_kevent.data &= ~unreg;
+ if (!final) {
+ // Re-register for notification before delivery
+ _dispatch_kevent_resume(dk, flag, 0);
+ }
+
+ EV_SET(&kev, name, EVFILT_MACHPORT, EV_ADD|EV_ENABLE, flag, 0, dk);
+
+ TAILQ_FOREACH(dri, &dk->dk_sources, dr_list) {
+ _dispatch_source_merge_kevent(_dispatch_source_from_refs(dri), &kev);
+ if (final) {
+ // this can never happen again
+ // this must happen after the merge
+ // this may be racy in the future, but we don't provide a 'setter'
+ // API for the mask yet
+ _dispatch_source_from_refs(dri)->ds_pending_data_mask &= ~unreg;
+ }
+ }
+
+ if (final) {
+ // no more sources have these flags
+ dk->dk_kevent.fflags &= ~unreg;
+ }
+}
+
+static kern_return_t
+_dispatch_mach_notify_update(dispatch_kevent_t dk, uint32_t new_flags,
+ uint32_t del_flags, uint32_t mask, mach_msg_id_t notify_msgid,
+ mach_port_mscount_t notify_sync)
+{
+ mach_port_t previous, port = (mach_port_t)dk->dk_kevent.ident;
+ typeof(dk->dk_kevent.data) prev = dk->dk_kevent.data;
+ kern_return_t kr, krr = 0;
+
+ // Update notification registration state.
+ dk->dk_kevent.data |= (new_flags | dk->dk_kevent.fflags) & mask;
+ dk->dk_kevent.data &= ~(del_flags & mask);
+
+ _dispatch_debug_machport(port);
+ if ((dk->dk_kevent.data & mask) && !(prev & mask)) {
+ previous = MACH_PORT_NULL;
+ krr = mach_port_request_notification(mach_task_self(), port,
+ notify_msgid, notify_sync, _dispatch_event_port,
+ MACH_MSG_TYPE_MAKE_SEND_ONCE, &previous);
+ DISPATCH_VERIFY_MIG(krr);
+
+ switch(krr) {
+ case KERN_INVALID_NAME:
+ case KERN_INVALID_RIGHT:
+			// Suppress errors & clear registration state
+ dk->dk_kevent.data &= ~mask;
+ break;
+ default:
+			// Else, we don't expect any errors from Mach. Log any errors
+ if (dispatch_assume_zero(krr)) {
+ // log the error & clear registration state
+ dk->dk_kevent.data &= ~mask;
+ } else if (dispatch_assume_zero(previous)) {
+ // Another subsystem has beat libdispatch to requesting the
+ // specified Mach notification on this port. We should
+ // technically cache the previous port and message it when the
+ // kernel messages our port. Or we can just say screw those
+ // subsystems and deallocate the previous port.
+ // They should adopt libdispatch :-P
+ kr = mach_port_deallocate(mach_task_self(), previous);
+ DISPATCH_VERIFY_MIG(kr);
+ (void)dispatch_assume_zero(kr);
+ previous = MACH_PORT_NULL;
+ }
+ }
+ } else if (!(dk->dk_kevent.data & mask) && (prev & mask)) {
+ previous = MACH_PORT_NULL;
+ kr = mach_port_request_notification(mach_task_self(), port,
+ notify_msgid, notify_sync, MACH_PORT_NULL,
+ MACH_MSG_TYPE_MOVE_SEND_ONCE, &previous);
+ DISPATCH_VERIFY_MIG(kr);
+
+ switch (kr) {
+ case KERN_INVALID_NAME:
+ case KERN_INVALID_RIGHT:
+ case KERN_INVALID_ARGUMENT:
+ break;
+ default:
+ if (dispatch_assume_zero(kr)) {
+ // log the error
+ }
+ }
+ } else {
+ return 0;
+ }
+ if (slowpath(previous)) {
+ // the kernel has not consumed the send-once right yet
+ (void)dispatch_assume_zero(
+ _dispatch_send_consume_send_once_right(previous));
+ }
+ return krr;
+}
+
+static void
+_dispatch_mach_notify_source2(void *context)
+{
+ dispatch_source_t ds = context;
+ size_t maxsz = MAX(sizeof(union
+ __RequestUnion___dispatch_send_libdispatch_internal_protocol_subsystem),
+ sizeof(union
+ __ReplyUnion___dispatch_libdispatch_internal_protocol_subsystem));
+
+ dispatch_mig_server(ds, maxsz, libdispatch_internal_protocol_server);
+}
+
+void
+_dispatch_mach_notify_source_init(void *context DISPATCH_UNUSED)
+{
+ _dispatch_get_port_set();
+
+ _dispatch_mach_notify_source = dispatch_source_create(
+ DISPATCH_SOURCE_TYPE_MACH_RECV, _dispatch_event_port, 0,
+ &_dispatch_mgr_q);
+ dispatch_assert(_dispatch_mach_notify_source);
+ dispatch_set_context(_dispatch_mach_notify_source,
+ _dispatch_mach_notify_source);
+ dispatch_source_set_event_handler_f(_dispatch_mach_notify_source,
+ _dispatch_mach_notify_source2);
+ dispatch_resume(_dispatch_mach_notify_source);
+}
+
+kern_return_t
+_dispatch_mach_notify_port_deleted(mach_port_t notify DISPATCH_UNUSED,
+ mach_port_name_t name)
+{
+#if DISPATCH_DEBUG
+ _dispatch_log("Corruption: Mach send/send-once/dead-name right 0x%x "
+ "deleted prematurely", name);
+#endif
+
+ _dispatch_debug_machport(name);
+ _dispatch_mach_notify_merge(name, DISPATCH_MACH_SEND_DELETED,
+ _DISPATCH_MACH_SP_FLAGS, true);
+
+ return KERN_SUCCESS;
+}
+
+kern_return_t
+_dispatch_mach_notify_dead_name(mach_port_t notify DISPATCH_UNUSED,
+ mach_port_name_t name)
+{
+ kern_return_t kr;
+
+#if DISPATCH_DEBUG
+ _dispatch_log("machport[0x%08x]: dead-name notification: %s",
+ name, __func__);
+#endif
+ _dispatch_debug_machport(name);
+ _dispatch_mach_notify_merge(name, DISPATCH_MACH_SEND_DEAD,
+ _DISPATCH_MACH_SP_FLAGS, true);
+
+ // the act of receiving a dead name notification allocates a dead-name
+ // right that must be deallocated
+ kr = mach_port_deallocate(mach_task_self(), name);
+ DISPATCH_VERIFY_MIG(kr);
+ //(void)dispatch_assume_zero(kr);
+
+ return KERN_SUCCESS;
+}
+
+kern_return_t
+_dispatch_mach_notify_send_possible(mach_port_t notify DISPATCH_UNUSED,
+ mach_port_name_t name)
+{
+#if DISPATCH_DEBUG
+ _dispatch_log("machport[0x%08x]: send-possible notification: %s",
+ name, __func__);
+#endif
+ _dispatch_debug_machport(name);
+ _dispatch_mach_notify_merge(name, DISPATCH_MACH_SEND_POSSIBLE,
+ _DISPATCH_MACH_SP_FLAGS, false);
+
+ return KERN_SUCCESS;
+}
+
+mach_msg_return_t
+dispatch_mig_server(dispatch_source_t ds, size_t maxmsgsz,
+ dispatch_mig_callback_t callback)
+{
+ mach_msg_options_t options = MACH_RCV_MSG | MACH_RCV_TIMEOUT
+ | MACH_RCV_TRAILER_ELEMENTS(MACH_RCV_TRAILER_CTX)
+ | MACH_RCV_TRAILER_TYPE(MACH_MSG_TRAILER_FORMAT_0);
+ mach_msg_options_t tmp_options;
+ mig_reply_error_t *bufTemp, *bufRequest, *bufReply;
+ mach_msg_return_t kr = 0;
+ unsigned int cnt = 1000; // do not stall out serial queues
+ int demux_success;
+ bool received = false;
+ size_t rcv_size = maxmsgsz + MAX_TRAILER_SIZE;
+
+ // XXX FIXME -- allocate these elsewhere
+ bufRequest = alloca(rcv_size);
+ bufReply = alloca(rcv_size);
+ bufReply->Head.msgh_size = 0; // make CLANG happy
+ bufRequest->RetCode = 0;
+
+#if DISPATCH_DEBUG
+ options |= MACH_RCV_LARGE; // rdar://problem/8422992
+#endif
+ tmp_options = options;
+ // XXX FIXME -- change this to not starve out the target queue
+ for (;;) {
+ if (DISPATCH_OBJECT_SUSPENDED(ds) || (--cnt == 0)) {
+ options &= ~MACH_RCV_MSG;
+ tmp_options &= ~MACH_RCV_MSG;
+
+ if (!(tmp_options & MACH_SEND_MSG)) {
+ break;
+ }
+ }
+ kr = mach_msg(&bufReply->Head, tmp_options, bufReply->Head.msgh_size,
+ (mach_msg_size_t)rcv_size, (mach_port_t)ds->ds_ident_hack, 0,0);
+
+ tmp_options = options;
+
+ if (slowpath(kr)) {
+ switch (kr) {
+ case MACH_SEND_INVALID_DEST:
+ case MACH_SEND_TIMED_OUT:
+ if (bufReply->Head.msgh_bits & MACH_MSGH_BITS_COMPLEX) {
+ mach_msg_destroy(&bufReply->Head);
+ }
+ break;
+ case MACH_RCV_TIMED_OUT:
+ // Don't return an error if a message was sent this time or
+ // a message was successfully received previously
+ // rdar://problems/7363620&7791738
+				if (bufReply->Head.msgh_remote_port || received) {
+ kr = MACH_MSG_SUCCESS;
+ }
+ break;
+ case MACH_RCV_INVALID_NAME:
+ break;
+#if DISPATCH_DEBUG
+ case MACH_RCV_TOO_LARGE:
+ // receive messages that are too large and log their id and size
+ // rdar://problem/8422992
+ tmp_options &= ~MACH_RCV_LARGE;
+ size_t large_size = bufReply->Head.msgh_size + MAX_TRAILER_SIZE;
+ void *large_buf = malloc(large_size);
+ if (large_buf) {
+ rcv_size = large_size;
+ bufReply = large_buf;
+ }
+ if (!mach_msg(&bufReply->Head, tmp_options, 0,
+ (mach_msg_size_t)rcv_size,
+ (mach_port_t)ds->ds_ident_hack, 0, 0)) {
+ _dispatch_log("BUG in libdispatch client: "
+ "dispatch_mig_server received message larger than "
+ "requested size %zd: id = 0x%x, size = %d",
+ maxmsgsz, bufReply->Head.msgh_id,
+ bufReply->Head.msgh_size);
+ }
+ if (large_buf) {
+ free(large_buf);
+ }
+ // fall through
+#endif
+ default:
+ _dispatch_bug_mach_client(
+ "dispatch_mig_server: mach_msg() failed", kr);
+ break;
+ }
+ break;
+ }
+
+ if (!(tmp_options & MACH_RCV_MSG)) {
+ break;
+ }
+ received = true;
+
+ bufTemp = bufRequest;
+ bufRequest = bufReply;
+ bufReply = bufTemp;
+
+ demux_success = callback(&bufRequest->Head, &bufReply->Head);
+
+ if (!demux_success) {
+ // destroy the request - but not the reply port
+ bufRequest->Head.msgh_remote_port = 0;
+ mach_msg_destroy(&bufRequest->Head);
+ } else if (!(bufReply->Head.msgh_bits & MACH_MSGH_BITS_COMPLEX)) {
+ // if MACH_MSGH_BITS_COMPLEX is _not_ set, then bufReply->RetCode
+ // is present
+ if (slowpath(bufReply->RetCode)) {
+ if (bufReply->RetCode == MIG_NO_REPLY) {
+ continue;
+ }
+
+ // destroy the request - but not the reply port
+ bufRequest->Head.msgh_remote_port = 0;
+ mach_msg_destroy(&bufRequest->Head);
+ }
+ }
+
+ if (bufReply->Head.msgh_remote_port) {
+ tmp_options |= MACH_SEND_MSG;
+ if (MACH_MSGH_BITS_REMOTE(bufReply->Head.msgh_bits) !=
+ MACH_MSG_TYPE_MOVE_SEND_ONCE) {
+ tmp_options |= MACH_SEND_TIMEOUT;
+ }
+ }
+ }
+
+ return kr;
+}
+
+#endif /* HAVE_MACH */
+
+#pragma mark -
+#pragma mark dispatch_source_debug
+
+DISPATCH_NOINLINE
+static const char *
+_evfiltstr(short filt)
+{
+ switch (filt) {
+#define _evfilt2(f) case (f): return #f
+ _evfilt2(EVFILT_READ);
+ _evfilt2(EVFILT_WRITE);
+ _evfilt2(EVFILT_AIO);
+ _evfilt2(EVFILT_VNODE);
+ _evfilt2(EVFILT_PROC);
+ _evfilt2(EVFILT_SIGNAL);
+ _evfilt2(EVFILT_TIMER);
+#ifdef EVFILT_VM
+ _evfilt2(EVFILT_VM);
+#endif
+#if HAVE_MACH
+ _evfilt2(EVFILT_MACHPORT);
+#endif
+ _evfilt2(EVFILT_FS);
+ _evfilt2(EVFILT_USER);
+
+ _evfilt2(DISPATCH_EVFILT_TIMER);
+ _evfilt2(DISPATCH_EVFILT_CUSTOM_ADD);
+ _evfilt2(DISPATCH_EVFILT_CUSTOM_OR);
+ default:
+ return "EVFILT_missing";
+ }
+}
+
+static size_t
+_dispatch_source_debug_attr(dispatch_source_t ds, char* buf, size_t bufsiz)
+{
+ dispatch_queue_t target = ds->do_targetq;
+ return snprintf(buf, bufsiz, "target = %s[%p], pending_data = 0x%lx, "
+ "pending_data_mask = 0x%lx, ",
+ target ? target->dq_label : "", target,
+ ds->ds_pending_data, ds->ds_pending_data_mask);
+}
+
+static size_t
+_dispatch_timer_debug_attr(dispatch_source_t ds, char* buf, size_t bufsiz)
+{
+ dispatch_source_refs_t dr = ds->ds_refs;
+ return snprintf(buf, bufsiz, "timer = { target = 0x%llx, "
+ "last_fire = 0x%llx, interval = 0x%llx, flags = 0x%llx }, ",
+ ds_timer(dr).target, ds_timer(dr).last_fire, ds_timer(dr).interval,
+ ds_timer(dr).flags);
+}
+
+static size_t
+_dispatch_source_debug(dispatch_source_t ds, char* buf, size_t bufsiz)
+{
+ size_t offset = 0;
+ offset += snprintf(&buf[offset], bufsiz - offset, "%s[%p] = { ",
+ dx_kind(ds), ds);
+ offset += _dispatch_object_debug_attr(ds, &buf[offset], bufsiz - offset);
+ offset += _dispatch_source_debug_attr(ds, &buf[offset], bufsiz - offset);
+ if (ds->ds_is_timer) {
+ offset += _dispatch_timer_debug_attr(ds, &buf[offset], bufsiz - offset);
+ }
+ return offset;
+}
+
+static size_t
+_dispatch_source_kevent_debug(dispatch_source_t ds, char* buf, size_t bufsiz)
+{
+ size_t offset = _dispatch_source_debug(ds, buf, bufsiz);
+ offset += snprintf(&buf[offset], bufsiz - offset, "filter = %s }",
+ ds->ds_dkev ? _evfiltstr(ds->ds_dkev->dk_kevent.filter) : "????");
+ return offset;
+}
+
+#if DISPATCH_DEBUG
+void
+dispatch_debug_kevents(struct kevent* kev, size_t count, const char* str)
+{
+ size_t i;
+ for (i = 0; i < count; ++i) {
+ _dispatch_log("kevent[%lu] = { ident = %p, filter = %s, flags = 0x%x, "
+ "fflags = 0x%x, data = %p, udata = %p }: %s",
+ i, (void*)kev[i].ident, _evfiltstr(kev[i].filter), kev[i].flags,
+ kev[i].fflags, (void*)kev[i].data, (void*)kev[i].udata, str);
+ }
+}
+
+static void
+_dispatch_kevent_debugger2(void *context)
+{
+ struct sockaddr sa;
+ socklen_t sa_len = sizeof(sa);
+ int c, fd = (int)(long)context;
+ unsigned int i;
+ dispatch_kevent_t dk;
+ dispatch_source_t ds;
+ dispatch_source_refs_t dr;
+ FILE *debug_stream;
+
+ c = accept(fd, &sa, &sa_len);
+ if (c == -1) {
+ if (errno != EAGAIN) {
+ (void)dispatch_assume_zero(errno);
+ }
+ return;
+ }
+#if 0
+ int r = fcntl(c, F_SETFL, 0); // disable non-blocking IO
+ if (r == -1) {
+ (void)dispatch_assume_zero(errno);
+ }
+#endif
+ debug_stream = fdopen(c, "a");
+ if (!dispatch_assume(debug_stream)) {
+ close(c);
+ return;
+ }
+
+ fprintf(debug_stream, "HTTP/1.0 200 OK\r\n");
+ fprintf(debug_stream, "Content-type: text/html\r\n");
+ fprintf(debug_stream, "Pragma: nocache\r\n");
+ fprintf(debug_stream, "\r\n");
+ fprintf(debug_stream, "<html>\n");
+ fprintf(debug_stream, "<head><title>PID %u</title></head>\n", getpid());
+ fprintf(debug_stream, "<body>\n<ul>\n");
+
+ //fprintf(debug_stream, "<tr><td>DK</td><td>DK</td><td>DK</td><td>DK</td>"
+ // "<td>DK</td><td>DK</td><td>DK</td></tr>\n");
+
+ for (i = 0; i < DSL_HASH_SIZE; i++) {
+ if (TAILQ_EMPTY(&_dispatch_sources[i])) {
+ continue;
+ }
+ TAILQ_FOREACH(dk, &_dispatch_sources[i], dk_list) {
+ fprintf(debug_stream, "\t<br><li>DK %p ident %lu filter %s flags "
+ "0x%hx fflags 0x%x data 0x%lx udata %p\n",
+ dk, (unsigned long)dk->dk_kevent.ident,
+ _evfiltstr(dk->dk_kevent.filter), dk->dk_kevent.flags,
+ dk->dk_kevent.fflags, (unsigned long)dk->dk_kevent.data,
+ dk->dk_kevent.udata);
+ fprintf(debug_stream, "\t\t<ul>\n");
+ TAILQ_FOREACH(dr, &dk->dk_sources, dr_list) {
+ ds = _dispatch_source_from_refs(dr);
+ fprintf(debug_stream, "\t\t\t<li>DS %p refcnt 0x%x suspend "
+ "0x%x data 0x%lx mask 0x%lx flags 0x%x</li>\n",
+ ds, ds->do_ref_cnt, ds->do_suspend_cnt,
+ ds->ds_pending_data, ds->ds_pending_data_mask,
+ ds->ds_atomic_flags);
+ if (ds->do_suspend_cnt == DISPATCH_OBJECT_SUSPEND_LOCK) {
+ dispatch_queue_t dq = ds->do_targetq;
+ fprintf(debug_stream, "\t\t<br>DQ: %p refcnt 0x%x suspend "
+ "0x%x label: %s\n", dq, dq->do_ref_cnt,
+ dq->do_suspend_cnt, dq->dq_label);
+ }
+ }
+ fprintf(debug_stream, "\t\t</ul>\n");
+ fprintf(debug_stream, "\t</li>\n");
+ }
+ }
+ fprintf(debug_stream, "</ul>\n</body>\n</html>\n");
+ fflush(debug_stream);
+ fclose(debug_stream);
+}
+
+static void
+_dispatch_kevent_debugger2_cancel(void *context)
+{
+ int ret, fd = (int)(long)context;
+
+ ret = close(fd);
+	if (ret == -1) {
+ (void)dispatch_assume_zero(errno);
+ }
+}
+
+static void
+_dispatch_kevent_debugger(void *context DISPATCH_UNUSED)
+{
+ union {
+ struct sockaddr_in sa_in;
+ struct sockaddr sa;
+ } sa_u = {
+ .sa_in = {
+ .sin_family = AF_INET,
+ .sin_addr = { htonl(INADDR_LOOPBACK), },
+ },
+ };
+ dispatch_source_t ds;
+ const char *valstr;
+ int val, r, fd, sock_opt = 1;
+ socklen_t slen = sizeof(sa_u);
+
+ if (issetugid()) {
+ return;
+ }
+ valstr = getenv("LIBDISPATCH_DEBUGGER");
+ if (!valstr) {
+ return;
+ }
+ val = atoi(valstr);
+ if (val == 2) {
+ sa_u.sa_in.sin_addr.s_addr = 0;
+ }
+ fd = socket(PF_INET, SOCK_STREAM, 0);
+ if (fd == -1) {
+ (void)dispatch_assume_zero(errno);
+ return;
+ }
+ r = setsockopt(fd, SOL_SOCKET, SO_REUSEADDR, (void *)&sock_opt,
+ (socklen_t) sizeof sock_opt);
+ if (r == -1) {
+ (void)dispatch_assume_zero(errno);
+ goto out_bad;
+ }
+#if 0
+ r = fcntl(fd, F_SETFL, O_NONBLOCK);
+ if (r == -1) {
+ (void)dispatch_assume_zero(errno);
+ goto out_bad;
+ }
+#endif
+ r = bind(fd, &sa_u.sa, sizeof(sa_u));
+ if (r == -1) {
+ (void)dispatch_assume_zero(errno);
+ goto out_bad;
+ }
+ r = listen(fd, SOMAXCONN);
+ if (r == -1) {
+ (void)dispatch_assume_zero(errno);
+ goto out_bad;
+ }
+ r = getsockname(fd, &sa_u.sa, &slen);
+ if (r == -1) {
+ (void)dispatch_assume_zero(errno);
+ goto out_bad;
+ }
+
+ ds = dispatch_source_create(DISPATCH_SOURCE_TYPE_READ, fd, 0,
+ &_dispatch_mgr_q);
+ if (dispatch_assume(ds)) {
+ _dispatch_log("LIBDISPATCH: debug port: %hu",
+ (in_port_t)ntohs(sa_u.sa_in.sin_port));
+
+ /* ownership of fd transfers to ds */
+ dispatch_set_context(ds, (void *)(long)fd);
+ dispatch_source_set_event_handler_f(ds, _dispatch_kevent_debugger2);
+ dispatch_source_set_cancel_handler_f(ds,
+ _dispatch_kevent_debugger2_cancel);
+ dispatch_resume(ds);
+
+ return;
+ }
+out_bad:
+ close(fd);
+}
+
+#if HAVE_MACH
+
+#ifndef MACH_PORT_TYPE_SPREQUEST
+#define MACH_PORT_TYPE_SPREQUEST 0x40000000
+#endif
+
+void
+dispatch_debug_machport(mach_port_t name, const char* str)
+{
+ mach_port_type_t type;
+ mach_msg_bits_t ns = 0, nr = 0, nso = 0, nd = 0;
+ unsigned int dnreqs = 0, dnrsiz;
+ kern_return_t kr = mach_port_type(mach_task_self(), name, &type);
+ if (kr) {
+ _dispatch_log("machport[0x%08x] = { error(0x%x) \"%s\" }: %s", name,
+ kr, mach_error_string(kr), str);
+ return;
+ }
+ if (type & MACH_PORT_TYPE_SEND) {
+ (void)dispatch_assume_zero(mach_port_get_refs(mach_task_self(), name,
+ MACH_PORT_RIGHT_SEND, &ns));
+ }
+ if (type & MACH_PORT_TYPE_SEND_ONCE) {
+ (void)dispatch_assume_zero(mach_port_get_refs(mach_task_self(), name,
+ MACH_PORT_RIGHT_SEND_ONCE, &nso));
+ }
+ if (type & MACH_PORT_TYPE_DEAD_NAME) {
+ (void)dispatch_assume_zero(mach_port_get_refs(mach_task_self(), name,
+ MACH_PORT_RIGHT_DEAD_NAME, &nd));
+ }
+ if (type & (MACH_PORT_TYPE_RECEIVE|MACH_PORT_TYPE_SEND|
+ MACH_PORT_TYPE_SEND_ONCE)) {
+ (void)dispatch_assume_zero(mach_port_dnrequest_info(mach_task_self(),
+ name, &dnrsiz, &dnreqs));
+ }
+ if (type & MACH_PORT_TYPE_RECEIVE) {
+ mach_port_status_t status = { .mps_pset = 0, };
+ mach_msg_type_number_t cnt = MACH_PORT_RECEIVE_STATUS_COUNT;
+ (void)dispatch_assume_zero(mach_port_get_refs(mach_task_self(), name,
+ MACH_PORT_RIGHT_RECEIVE, &nr));
+ (void)dispatch_assume_zero(mach_port_get_attributes(mach_task_self(),
+ name, MACH_PORT_RECEIVE_STATUS, (void*)&status, &cnt));
+ _dispatch_log("machport[0x%08x] = { R(%03u) S(%03u) SO(%03u) D(%03u) "
+ "dnreqs(%03u) spreq(%s) nsreq(%s) pdreq(%s) srights(%s) "
+ "sorights(%03u) qlim(%03u) msgcount(%03u) mkscount(%03u) "
+ "seqno(%03u) }: %s", name, nr, ns, nso, nd, dnreqs,
+ type & MACH_PORT_TYPE_SPREQUEST ? "Y":"N",
+ status.mps_nsrequest ? "Y":"N", status.mps_pdrequest ? "Y":"N",
+ status.mps_srights ? "Y":"N", status.mps_sorights,
+ status.mps_qlimit, status.mps_msgcount, status.mps_mscount,
+ status.mps_seqno, str);
+ } else if (type & (MACH_PORT_TYPE_SEND|MACH_PORT_TYPE_SEND_ONCE|
+ MACH_PORT_TYPE_DEAD_NAME)) {
+ _dispatch_log("machport[0x%08x] = { R(%03u) S(%03u) SO(%03u) D(%03u) "
+ "dnreqs(%03u) spreq(%s) }: %s", name, nr, ns, nso, nd, dnreqs,
+ type & MACH_PORT_TYPE_SPREQUEST ? "Y":"N", str);
+ } else {
+ _dispatch_log("machport[0x%08x] = { type(0x%08x) }: %s", name, type,
+ str);
+ }
+}
+
+#endif // HAVE_MACH
+
+#endif // DISPATCH_DEBUG
diff --git a/src/source_internal.h b/src/source_internal.h
index 7ba2048..a44eef7 100644
--- a/src/source_internal.h
+++ b/src/source_internal.h
@@ -1,20 +1,20 @@
/*
- * Copyright (c) 2008-2009 Apple Inc. All rights reserved.
+ * Copyright (c) 2008-2011 Apple Inc. All rights reserved.
*
* @APPLE_APACHE_LICENSE_HEADER_START@
- *
+ *
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
- *
+ *
* http://www.apache.org/licenses/LICENSE-2.0
- *
+ *
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
- *
+ *
* @APPLE_APACHE_LICENSE_HEADER_END@
*/
@@ -32,30 +32,103 @@
#include <dispatch/base.h> // for HeaderDoc
#endif
+// NOTE: dispatch_source_mach_send_flags_t and dispatch_source_mach_recv_flags_t
+// bit values must not overlap as they share the same kevent fflags !
+
+/*!
+ * @enum dispatch_source_mach_send_flags_t
+ *
+ * @constant DISPATCH_MACH_SEND_DELETED
+ * Port-deleted notification. Disabled for source registration.
+ */
+enum {
+ DISPATCH_MACH_SEND_DELETED = 0x4,
+};
+/*!
+ * @enum dispatch_source_mach_recv_flags_t
+ *
+ * @constant DISPATCH_MACH_RECV_MESSAGE
+ * Receive right has pending messages
+ *
+ * @constant DISPATCH_MACH_RECV_NO_SENDERS
+ * Receive right has no more senders. TODO <rdar://problem/8132399>
+ */
+enum {
+ DISPATCH_MACH_RECV_MESSAGE = 0x2,
+ DISPATCH_MACH_RECV_NO_SENDERS = 0x10,
+};
+
+enum {
+ DISPATCH_TIMER_WALL_CLOCK = 0x4,
+};
+
+#define DISPATCH_EVFILT_TIMER (-EVFILT_SYSCOUNT - 1)
+#define DISPATCH_EVFILT_CUSTOM_ADD (-EVFILT_SYSCOUNT - 2)
+#define DISPATCH_EVFILT_CUSTOM_OR (-EVFILT_SYSCOUNT - 3)
+#define DISPATCH_EVFILT_SYSCOUNT ( EVFILT_SYSCOUNT + 3)
+
+#define DISPATCH_TIMER_INDEX_WALL 0
+#define DISPATCH_TIMER_INDEX_MACH 1
+#define DISPATCH_TIMER_INDEX_DISARM 2
+
struct dispatch_source_vtable_s {
DISPATCH_VTABLE_HEADER(dispatch_source_s);
};
extern const struct dispatch_source_vtable_s _dispatch_source_kevent_vtable;
-struct dispatch_kevent_s;
+struct dispatch_kevent_s {
+ TAILQ_ENTRY(dispatch_kevent_s) dk_list;
+ TAILQ_HEAD(, dispatch_source_refs_s) dk_sources;
+ struct kevent dk_kevent;
+};
+
typedef struct dispatch_kevent_s *dispatch_kevent_t;
+struct dispatch_source_type_s {
+ struct kevent ke;
+ uint64_t mask;
+ void (*init)(dispatch_source_t ds, dispatch_source_type_t type,
+ uintptr_t handle, unsigned long mask, dispatch_queue_t q);
+};
+
struct dispatch_timer_source_s {
uint64_t target;
- uint64_t start;
+ uint64_t last_fire;
uint64_t interval;
uint64_t leeway;
uint64_t flags; // dispatch_timer_flags_t
+ unsigned long missed;
};
-struct dispatch_set_timer_params {
- dispatch_source_t ds;
- uintptr_t ident;
- struct dispatch_timer_source_s values;
+// Source state which may contain references to the source object
+// Separately allocated so that 'leaks' can see sources <rdar://problem/9050566>
+struct dispatch_source_refs_s {
+ TAILQ_ENTRY(dispatch_source_refs_s) dr_list;
+ uintptr_t dr_source_wref; // "weak" backref to dispatch_source_t
+ dispatch_function_t ds_handler_func;
+ void *ds_handler_ctxt;
+ void *ds_cancel_handler;
+ void *ds_registration_handler;
};
+typedef struct dispatch_source_refs_s *dispatch_source_refs_t;
+
+struct dispatch_timer_source_refs_s {
+ struct dispatch_source_refs_s _ds_refs;
+ struct dispatch_timer_source_s _ds_timer;
+};
+
+#define _dispatch_ptr2wref(ptr) (~(uintptr_t)(ptr))
+#define _dispatch_wref2ptr(ref) ((void*)~(ref))
+#define _dispatch_source_from_refs(dr) \
+ ((dispatch_source_t)_dispatch_wref2ptr((dr)->dr_source_wref))
+#define ds_timer(dr) \
+ (((struct dispatch_timer_source_refs_s *)(dr))->_ds_timer)
+
+// ds_atomic_flags bits
#define DSF_CANCELED 1u // cancellation has been requested
+#define DSF_ARMED 2u // source is armed
struct dispatch_source_s {
DISPATCH_STRUCT_HEADER(dispatch_source_s, dispatch_source_vtable_s);
@@ -67,92 +140,26 @@
struct {
char dq_label[8];
dispatch_kevent_t ds_dkev;
-
- dispatch_function_t ds_handler_func;
- void *ds_handler_ctxt;
-
- void *ds_cancel_handler;
-
- unsigned int ds_is_level:1,
- ds_is_adder:1,
- ds_is_installed:1,
- ds_needs_rearm:1,
- ds_is_armed:1,
- ds_is_legacy:1,
- ds_cancel_is_block:1,
- ds_handler_is_block:1;
-
+ dispatch_source_refs_t ds_refs;
unsigned int ds_atomic_flags;
-
+ unsigned int
+ ds_is_level:1,
+ ds_is_adder:1,
+ ds_is_installed:1,
+ ds_needs_rearm:1,
+ ds_is_timer:1,
+ ds_cancel_is_block:1,
+ ds_handler_is_block:1,
+ ds_registration_is_block:1;
unsigned long ds_data;
unsigned long ds_pending_data;
unsigned long ds_pending_data_mask;
-
- TAILQ_ENTRY(dispatch_source_s) ds_list;
-
unsigned long ds_ident_hack;
-
- struct dispatch_timer_source_s ds_timer;
};
};
};
-
void _dispatch_source_xref_release(dispatch_source_t ds);
-dispatch_queue_t _dispatch_source_invoke(dispatch_source_t ds);
-bool _dispatch_source_probe(dispatch_source_t ds);
-void _dispatch_source_dispose(dispatch_source_t ds);
-size_t _dispatch_source_debug(dispatch_source_t ds, char* buf, size_t bufsiz);
-
-void _dispatch_source_kevent_resume(dispatch_source_t ds, uint32_t new_flags, uint32_t del_flags);
-void _dispatch_kevent_merge(dispatch_source_t ds);
-void _dispatch_kevent_release(dispatch_source_t ds);
-void _dispatch_timer_list_update(dispatch_source_t ds);
-
-struct dispatch_source_type_s {
- void *opaque;
- uint64_t mask;
- bool (*init) (dispatch_source_t ds,
- dispatch_source_type_t type,
- uintptr_t handle,
- unsigned long mask,
- dispatch_queue_t q);
-};
-
-#define DISPATCH_TIMER_INDEX_WALL 0
-#define DISPATCH_TIMER_INDEX_MACH 1
-
-#ifdef DISPATCH_NO_LEGACY
-enum {
- DISPATCH_TIMER_WALL_CLOCK = 0x4,
-};
-enum {
- DISPATCH_TIMER_INTERVAL = 0x0,
- DISPATCH_TIMER_ONESHOT = 0x1,
- DISPATCH_TIMER_ABSOLUTE = 0x3,
-};
-enum {
- DISPATCH_MACHPORT_DEAD = 0x1,
- DISPATCH_MACHPORT_RECV = 0x2,
- DISPATCH_MACHPORT_DELETED = 0x4,
-};
-#endif
-
-
-extern const struct dispatch_source_type_s _dispatch_source_type_timer;
-extern const struct dispatch_source_type_s _dispatch_source_type_read;
-extern const struct dispatch_source_type_s _dispatch_source_type_write;
-extern const struct dispatch_source_type_s _dispatch_source_type_proc;
-extern const struct dispatch_source_type_s _dispatch_source_type_signal;
-extern const struct dispatch_source_type_s _dispatch_source_type_vnode;
-extern const struct dispatch_source_type_s _dispatch_source_type_vfs;
-
-#if HAVE_MACH
-extern const struct dispatch_source_type_s _dispatch_source_type_mach_send;
-extern const struct dispatch_source_type_s _dispatch_source_type_mach_recv;
-#endif
-
-extern const struct dispatch_source_type_s _dispatch_source_type_data_add;
-extern const struct dispatch_source_type_s _dispatch_source_type_data_or;
+void _dispatch_mach_notify_source_init(void *context);
#endif /* __DISPATCH_SOURCE_INTERNAL__ */
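
The _dispatch_ptr2wref()/_dispatch_wref2ptr() macros above store the back-pointer from dispatch_source_refs_s to its dispatch_source_t as the bitwise complement of the pointer, so the separately allocated refs structure no longer looks like it retains the source to conservative heap scanners such as leaks (<rdar://problem/9050566>), while the real pointer remains trivially recoverable. A standalone sketch of the round trip (not libdispatch code):

    #include <assert.h>
    #include <stdint.h>
    #include <stdlib.h>

    /* Same trick as dispatch_source_refs_s: keep the pointer complemented
     * so leak-scanning tools do not treat it as a live reference. */
    #define ptr2wref(ptr) (~(uintptr_t)(ptr))
    #define wref2ptr(ref) ((void *)~(ref))

    int
    main(void)
    {
        int *obj = malloc(sizeof(int));
        uintptr_t wref = ptr2wref(obj);         /* stored in the refs struct */
        assert(wref2ptr(wref) == (void *)obj);  /* recovered when needed */
        free(obj);
        return 0;
    }
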
diff --git a/src/time.c b/src/time.c
index f4c2d86..4c0285a 100644
--- a/src/time.c
+++ b/src/time.c
@@ -1,46 +1,50 @@
/*
- * Copyright (c) 2008-2009 Apple Inc. All rights reserved.
+ * Copyright (c) 2008-2010 Apple Inc. All rights reserved.
*
* @APPLE_APACHE_LICENSE_HEADER_START@
- *
+ *
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
- *
+ *
* http://www.apache.org/licenses/LICENSE-2.0
- *
+ *
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
- *
+ *
* @APPLE_APACHE_LICENSE_HEADER_END@
*/
#include "internal.h"
-uint64_t
+uint64_t
_dispatch_get_nanoseconds(void)
{
-#if !TARGET_OS_WIN32
struct timeval now;
int r = gettimeofday(&now, NULL);
dispatch_assert_zero(r);
dispatch_assert(sizeof(NSEC_PER_SEC) == 8);
dispatch_assert(sizeof(NSEC_PER_USEC) == 8);
return now.tv_sec * NSEC_PER_SEC + now.tv_usec * NSEC_PER_USEC;
-#else /* TARGET_OS_WIN32 */
- // FILETIME is 100-nanosecond intervals since January 1, 1601 (UTC).
- FILETIME ft;
- ULARGE_INTEGER li;
- GetSystemTimeAsFileTime(&ft);
- li.LowPart = ft.dwLowDateTime;
- li.HighPart = ft.dwHighDateTime;
- return li.QuadPart * 100ull;
-#endif /* TARGET_OS_WIN32 */
}
+#if !(defined(__i386__) || defined(__x86_64__) || !HAVE_MACH_ABSOLUTE_TIME)
+DISPATCH_CACHELINE_ALIGN _dispatch_host_time_data_s _dispatch_host_time_data;
+
+void
+_dispatch_get_host_time_init(void *context DISPATCH_UNUSED)
+{
+ mach_timebase_info_data_t tbi;
+ (void)dispatch_assume_zero(mach_timebase_info(&tbi));
+ _dispatch_host_time_data.frac = tbi.numer;
+ _dispatch_host_time_data.frac /= tbi.denom;
+ _dispatch_host_time_data.ratio_1_to_1 = (tbi.numer == tbi.denom);
+}
+#endif
+
dispatch_time_t
dispatch_time(dispatch_time_t inval, int64_t delta)
{
@@ -51,29 +55,29 @@
// wall clock
if (delta >= 0) {
if ((int64_t)(inval -= delta) >= 0) {
- return DISPATCH_TIME_FOREVER; // overflow
+ return DISPATCH_TIME_FOREVER; // overflow
}
return inval;
}
if ((int64_t)(inval -= delta) >= -1) {
// -1 is special == DISPATCH_TIME_FOREVER == forever
- return -2; // underflow
+ return -2; // underflow
}
return inval;
}
// mach clock
delta = _dispatch_time_nano2mach(delta);
- if (inval == 0) {
+ if (inval == 0) {
inval = _dispatch_absolute_time();
}
if (delta >= 0) {
if ((int64_t)(inval += delta) <= 0) {
- return DISPATCH_TIME_FOREVER; // overflow
+ return DISPATCH_TIME_FOREVER; // overflow
}
return inval;
}
if ((int64_t)(inval += delta) < 1) {
- return 1; // underflow
+ return 1; // underflow
}
return inval;
}
@@ -82,7 +86,7 @@
dispatch_walltime(const struct timespec *inval, int64_t delta)
{
int64_t nsec;
-
+
if (inval) {
nsec = inval->tv_sec * 1000000000ull + inval->tv_nsec;
} else {
@@ -117,48 +121,3 @@
now = _dispatch_absolute_time();
return now >= when ? 0 : _dispatch_time_mach2nano(when - now);
}
-
-#if USE_POSIX_SEM
-/*
- * Unlike Mach semaphores, POSIX semaphores take an absolute, real time as an
- * argument to sem_timedwait(). This routine converts from dispatch_time_t
- * but assumes the caller has already handled the possibility of
- * DISPATCH_TIME_FOREVER.
- */
-struct timespec
-_dispatch_timeout_ts(dispatch_time_t when)
-{
- struct timespec ts_realtime;
- uint64_t abstime, realtime;
- int ret;
-
- if (when == 0) {
- ret = clock_gettime(CLOCK_REALTIME, &ts_realtime);
- (void)dispatch_assume_zero(ret);
- return (ts_realtime);
- }
- if ((int64_t)when < 0) {
- ret = clock_gettime(CLOCK_REALTIME, &ts_realtime);
- (void)dispatch_assume_zero(ret);
- when = -(int64_t)when + ts_realtime.tv_sec * NSEC_PER_SEC +
- ts_realtime.tv_nsec;
- ts_realtime.tv_sec = when / NSEC_PER_SEC;
- ts_realtime.tv_nsec = when % NSEC_PER_SEC;
- return (ts_realtime);
- }
-
- /*
- * Rebase 'when': (when - abstime) + realtime.
- *
- * XXXRW: Should we cache this delta to avoid system calls?
- */
- abstime = _dispatch_absolute_time();
- ret = clock_gettime(CLOCK_REALTIME, &ts_realtime);
- (void)dispatch_assume_zero(ret);
- realtime = ts_realtime.tv_sec * NSEC_PER_SEC + ts_realtime.tv_nsec +
- (when - abstime);
- ts_realtime.tv_sec = realtime / NSEC_PER_SEC;
- ts_realtime.tv_nsec = realtime % NSEC_PER_SEC;
- return (ts_realtime);
-}
-#endif
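
_dispatch_get_host_time_init() above caches the Mach timebase ratio as a double ("frac") plus a 1:1 fast-path flag, so repeated mach-time/nanosecond conversions avoid calling mach_timebase_info() each time. A standalone sketch of the underlying conversion using only the public API (lazy, non-thread-safe caching here; libdispatch initializes its cache exactly once via an init routine):

    #include <mach/mach_time.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Convert mach_absolute_time() ticks to nanoseconds. */
    static uint64_t
    ticks_to_nanoseconds(uint64_t ticks)
    {
        static mach_timebase_info_data_t tbi;
        if (tbi.denom == 0) {
            (void)mach_timebase_info(&tbi);
        }
        if (tbi.numer == tbi.denom) {
            return ticks;       /* e.g. x86, where the timebase is 1:1 */
        }
        return ticks * tbi.numer / tbi.denom;
    }

    int
    main(void)
    {
        printf("uptime: %llu ns\n",
            (unsigned long long)ticks_to_nanoseconds(mach_absolute_time()));
        return 0;
    }
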
diff --git a/src/trace.h b/src/trace.h
new file mode 100644
index 0000000..0d9bc3d
--- /dev/null
+++ b/src/trace.h
@@ -0,0 +1,152 @@
+/*
+ * Copyright (c) 2010-2011 Apple Inc. All rights reserved.
+ *
+ * @APPLE_APACHE_LICENSE_HEADER_START@
+ *
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ *
+ * @APPLE_APACHE_LICENSE_HEADER_END@
+ */
+
+/*
+ * IMPORTANT: This header file describes INTERNAL interfaces to libdispatch
+ * which are subject to change in future releases of Mac OS X. Any applications
+ * relying on these interfaces WILL break.
+ */
+
+#ifndef __DISPATCH_TRACE__
+#define __DISPATCH_TRACE__
+
+#if DISPATCH_USE_DTRACE
+
+#include "provider.h"
+
+#define _dispatch_trace_callout(_c, _f, _dcc) do { \
+ if (slowpath(DISPATCH_CALLOUT_ENTRY_ENABLED()) || \
+ slowpath(DISPATCH_CALLOUT_RETURN_ENABLED())) { \
+ dispatch_queue_t _dq = _dispatch_queue_get_current(); \
+ char *_label = _dq ? _dq->dq_label : ""; \
+ dispatch_function_t _func = (dispatch_function_t)(_f); \
+ void *_ctxt = (_c); \
+ DISPATCH_CALLOUT_ENTRY(_dq, _label, _func, _ctxt); \
+ _dcc; \
+ DISPATCH_CALLOUT_RETURN(_dq, _label, _func, _ctxt); \
+ return; \
+ } \
+ return _dcc; \
+ } while (0)
+
+DISPATCH_ALWAYS_INLINE
+static inline void
+_dispatch_trace_client_callout(void *ctxt, dispatch_function_t f)
+{
+ _dispatch_trace_callout(ctxt, f == _dispatch_call_block_and_release &&
+ ctxt ? ((struct Block_basic *)ctxt)->Block_invoke : f,
+ _dispatch_client_callout(ctxt, f));
+}
+
+DISPATCH_ALWAYS_INLINE
+static inline void
+_dispatch_trace_client_callout2(void *ctxt, size_t i, void (*f)(void *, size_t))
+{
+ _dispatch_trace_callout(ctxt, f, _dispatch_client_callout2(ctxt, i, f));
+}
+
+#ifdef __BLOCKS__
+DISPATCH_ALWAYS_INLINE
+static inline void
+_dispatch_trace_client_callout_block(dispatch_block_t b)
+{
+ struct Block_basic *bb = (void*)b;
+ _dispatch_trace_callout(b, bb->Block_invoke,
+ _dispatch_client_callout(b, (dispatch_function_t)bb->Block_invoke));
+}
+#endif
+
+#define _dispatch_client_callout _dispatch_trace_client_callout
+#define _dispatch_client_callout2 _dispatch_trace_client_callout2
+#define _dispatch_client_callout_block _dispatch_trace_client_callout_block
+
+#define _dispatch_trace_continuation(_q, _o, _t) do { \
+ dispatch_queue_t _dq = (_q); \
+ char *_label = _dq ? _dq->dq_label : ""; \
+ struct dispatch_object_s *_do = (_o); \
+ char *_kind; \
+ dispatch_function_t _func; \
+ void *_ctxt; \
+ if (DISPATCH_OBJ_IS_VTABLE(_do)) { \
+ _ctxt = _do->do_ctxt; \
+ _kind = (char*)dx_kind(_do); \
+ if (dx_type(_do) == DISPATCH_SOURCE_KEVENT_TYPE && \
+ (_dq) != &_dispatch_mgr_q) { \
+ _func = ((dispatch_source_t)_do)->ds_refs->ds_handler_func; \
+ } else { \
+ _func = (dispatch_function_t)_dispatch_queue_invoke; \
+ } \
+ } else { \
+ struct dispatch_continuation_s *_dc = (void*)(_do); \
+ _ctxt = _dc->dc_ctxt; \
+ if ((long)_dc->do_vtable & DISPATCH_OBJ_SYNC_SLOW_BIT) { \
+ _kind = "semaphore"; \
+ _func = (dispatch_function_t)dispatch_semaphore_signal; \
+ } else if (_dc->dc_func == _dispatch_call_block_and_release) { \
+ _kind = "block"; \
+ _func = ((struct Block_basic *)_dc->dc_ctxt)->Block_invoke;\
+ } else { \
+ _kind = "function"; \
+ _func = _dc->dc_func; \
+ } \
+ } \
+ _t(_dq, _label, _do, _kind, _func, _ctxt); \
+ } while (0)
+
+DISPATCH_ALWAYS_INLINE
+static inline void
+_dispatch_trace_queue_push_list(dispatch_queue_t dq, dispatch_object_t _head,
+ dispatch_object_t _tail)
+{
+ if (slowpath(DISPATCH_QUEUE_PUSH_ENABLED())) {
+ struct dispatch_object_s *dou = _head._do;
+ do {
+ _dispatch_trace_continuation(dq, dou, DISPATCH_QUEUE_PUSH);
+ } while (dou != _tail._do && (dou = dou->do_next));
+ }
+ _dispatch_queue_push_list(dq, _head, _tail);
+}
+
+DISPATCH_ALWAYS_INLINE
+static inline void
+_dispatch_queue_push_notrace(dispatch_queue_t dq, dispatch_object_t dou)
+{
+ _dispatch_queue_push_list(dq, dou, dou);
+}
+
+#define _dispatch_queue_push_list _dispatch_trace_queue_push_list
+
+DISPATCH_ALWAYS_INLINE
+static inline void
+_dispatch_trace_continuation_pop(dispatch_queue_t dq,
+ dispatch_object_t dou)
+{
+ if (slowpath(DISPATCH_QUEUE_POP_ENABLED())) {
+ _dispatch_trace_continuation(dq, dou._do, DISPATCH_QUEUE_POP);
+ }
+}
+#else
+
+#define _dispatch_queue_push_notrace _dispatch_queue_push
+#define _dispatch_trace_continuation_pop(dq, dou)
+
+#endif // DISPATCH_USE_DTRACE
+
+#endif // __DISPATCH_TRACE__
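
The tracing wrappers above guard every probe site with the *_ENABLED() macros from the dtrace-generated provider.h, so the cost of gathering probe arguments (queue label, continuation kind, function pointer) is only paid while a DTrace consumer actually has the probe enabled. A standalone sketch of that pattern with a hypothetical provider (the example.d / example_provider.h names are illustrative, not part of libdispatch):

    /*
     * example.d, compiled with `dtrace -h -s example.d` into
     * example_provider.h:
     *
     *     provider example {
     *         probe callout__entry(void *func, void *ctxt);
     *         probe callout__return(void *func, void *ctxt);
     *     };
     */
    #include "example_provider.h"

    typedef void (*work_fn_t)(void *);

    static void
    traced_callout(work_fn_t func, void *ctxt)
    {
        /* Same shape as _dispatch_trace_callout(): only fire the probes
         * (and pay for marshalling their arguments) when enabled. */
        if (EXAMPLE_CALLOUT_ENTRY_ENABLED() ||
                EXAMPLE_CALLOUT_RETURN_ENABLED()) {
            EXAMPLE_CALLOUT_ENTRY(func, ctxt);
            func(ctxt);
            EXAMPLE_CALLOUT_RETURN(func, ctxt);
            return;
        }
        func(ctxt);
    }
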
diff --git a/tools/dispatch_trace.d b/tools/dispatch_trace.d
new file mode 100755
index 0000000..9059e4e
--- /dev/null
+++ b/tools/dispatch_trace.d
@@ -0,0 +1,76 @@
+#!/usr/sbin/dtrace -Z -s
+
+/*
+ * Copyright (c) 2010-2011 Apple Inc. All rights reserved.
+ *
+ * @APPLE_APACHE_LICENSE_HEADER_START@
+ *
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ *
+ * @APPLE_APACHE_LICENSE_HEADER_END@
+ */
+
+/*
+ * Usage: dispatch_trace.d -p [pid]
+ * traced process must have been executed with
+ * DYLD_IMAGE_SUFFIX=_profile or DYLD_IMAGE_SUFFIX=_debug
+ */
+
+#pragma D option quiet
+#pragma D option bufsize=16m
+
+BEGIN {
+ printf("%-8s %-3s %-8s %-35s%-15s%-?s %-43s%-?s %-14s%-?s %s\n",
+ "Time us", "CPU", "Thread", "Function", "Probe", "Queue", "Label",
+ "Item", "Kind", "Context", "Symbol");
+}
+
+dispatch$target:libdispatch_profile.dylib::queue-push,
+dispatch$target:libdispatch_debug.dylib::queue-push,
+dispatch$target:libdispatch_profile.dylib::queue-pop,
+dispatch$target:libdispatch_debug.dylib::queue-pop,
+dispatch$target:libdispatch_profile.dylib::callout-entry,
+dispatch$target:libdispatch_debug.dylib::callout-entry,
+dispatch$target:libdispatch_profile.dylib::callout-return,
+dispatch$target:libdispatch_debug.dylib::callout-return /!start/ {
+ start = walltimestamp;
+}
+
+/* probe queue-push/-pop(dispatch_queue_t queue, const char *label,
+ * dispatch_object_t item, const char *kind,
+ * dispatch_function_t function, void *context)
+ */
+dispatch$target:libdispatch_profile.dylib::queue-push,
+dispatch$target:libdispatch_debug.dylib::queue-push,
+dispatch$target:libdispatch_profile.dylib::queue-pop,
+dispatch$target:libdispatch_debug.dylib::queue-pop {
+ printf("%-8d %-3d 0x%08p %-35s%-15s0x%0?p %-43s0x%0?p %-14s0x%0?p",
+ (walltimestamp-start)/1000, cpu, tid, probefunc, probename, arg0,
+ copyinstr(arg1, 42), arg2, copyinstr(arg3, 13), arg5);
+ usym(arg4);
+ printf("\n");
+}
+
+/* probe callout-entry/-return(dispatch_queue_t queue, const char *label,
+ * dispatch_function_t function, void *context)
+ */
+dispatch$target:libdispatch_profile.dylib::callout-entry,
+dispatch$target:libdispatch_debug.dylib::callout-entry,
+dispatch$target:libdispatch_profile.dylib::callout-return,
+dispatch$target:libdispatch_debug.dylib::callout-return {
+ printf("%-8d %-3d 0x%08p %-35s%-15s0x%0?p %-43s%-?s %-14s0x%0?p",
+ (walltimestamp-start)/1000, cpu, tid, probefunc, probename, arg0,
+ copyinstr(arg1, 42), "", "", arg3);
+ usym(arg2);
+ printf("\n");
+}
diff --git a/xcodeconfig/libdispatch-resolved.xcconfig b/xcodeconfig/libdispatch-resolved.xcconfig
new file mode 100644
index 0000000..70e405f
--- /dev/null
+++ b/xcodeconfig/libdispatch-resolved.xcconfig
@@ -0,0 +1,25 @@
+//
+// Copyright (c) 2010-2011 Apple Inc. All rights reserved.
+//
+// @APPLE_APACHE_LICENSE_HEADER_START@
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+//
+// @APPLE_APACHE_LICENSE_HEADER_END@
+//
+
+SUPPORTED_PLATFORMS = iphoneos
+PRODUCT_NAME = libdispatch_$(DISPATCH_RESOLVED_VARIANT)
+OTHER_LDFLAGS =
+SKIP_INSTALL = YES
+VERSIONING_SYSTEM =
diff --git a/xcodeconfig/libdispatch-resolver.xcconfig b/xcodeconfig/libdispatch-resolver.xcconfig
new file mode 100644
index 0000000..d8abe3d
--- /dev/null
+++ b/xcodeconfig/libdispatch-resolver.xcconfig
@@ -0,0 +1,20 @@
+//
+// Copyright (c) 2010-2011 Apple Inc. All rights reserved.
+//
+// @APPLE_APACHE_LICENSE_HEADER_START@
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+//
+// @APPLE_APACHE_LICENSE_HEADER_END@
+//
+
diff --git a/xcodeconfig/libdispatch.xcconfig b/xcodeconfig/libdispatch.xcconfig
new file mode 100644
index 0000000..e7d44f4
--- /dev/null
+++ b/xcodeconfig/libdispatch.xcconfig
@@ -0,0 +1,67 @@
+//
+// Copyright (c) 2010-2011 Apple Inc. All rights reserved.
+//
+// @APPLE_APACHE_LICENSE_HEADER_START@
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+//
+// @APPLE_APACHE_LICENSE_HEADER_END@
+//
+
+#include "<DEVELOPER_DIR>/Makefiles/CoreOS/Xcode/BSD.xcconfig"
+SUPPORTED_PLATFORMS = macosx iphoneos iphonesimulator
+ARCHS[sdk=iphonesimulator*] = $(NATIVE_ARCH_32_BIT) // Override BSD.xcconfig ARCHS <rdar://problem/9303721>
+PRODUCT_NAME = libdispatch
+PRODUCT_NAME[sdk=iphonesimulator*] = libdispatch_sim
+EXECUTABLE_PREFIX =
+LD_DYLIB_INSTALL_NAME = /usr/lib/system/$(EXECUTABLE_NAME)
+INSTALL_PATH = /usr/lib/system
+INSTALL_PATH[sdk=iphonesimulator*] = $(SDKROOT)/usr/lib/system
+PUBLIC_HEADERS_FOLDER_PATH = /usr/include/dispatch
+PUBLIC_HEADERS_FOLDER_PATH[sdk=iphonesimulator*] = $(SDKROOT)/usr/include/dispatch
+PRIVATE_HEADERS_FOLDER_PATH = /usr/local/include/dispatch
+PRIVATE_HEADERS_FOLDER_PATH[sdk=iphonesimulator*] = $(SDKROOT)/usr/local/include/dispatch
+HEADER_SEARCH_PATHS = $(SDKROOT)/System/Library/Frameworks/System.framework/PrivateHeaders $(PROJECT_DIR)
+INSTALLHDRS_SCRIPT_PHASE = YES
+ALWAYS_SEARCH_USER_PATHS = NO
+BUILD_VARIANTS = normal debug profile
+ONLY_ACTIVE_ARCH = NO
+GCC_VERSION = com.apple.compilers.llvm.clang.1_0
+GCC_STRICT_ALIASING = YES
+GCC_SYMBOLS_PRIVATE_EXTERN = YES
+GCC_CW_ASM_SYNTAX = NO
+GCC_ENABLE_CPP_EXCEPTIONS = NO
+GCC_ENABLE_CPP_RTTI = NO
+GCC_ENABLE_OBJC_EXCEPTIONS = NO
+GCC_ENABLE_PASCAL_STRINGS = NO
+GCC_WARN_SHADOW = YES
+GCC_WARN_64_TO_32_BIT_CONVERSION = YES
+GCC_WARN_ABOUT_RETURN_TYPE = YES
+GCC_WARN_ABOUT_MISSING_PROTOTYPES = YES
+GCC_WARN_ABOUT_MISSING_NEWLINE = YES
+GCC_WARN_UNUSED_VARIABLE = YES
+GCC_TREAT_WARNINGS_AS_ERRORS = YES
+GCC_OPTIMIZATION_LEVEL = s
+GCC_THUMB_SUPPORT[arch=armv6] = NO
+GCC_PREPROCESSOR_DEFINITIONS = __DARWIN_NON_CANCELABLE=1
+GCC_PREPROCESSOR_DEFINITIONS[sdk=iphonesimulator*] = $(GCC_PREPROCESSOR_DEFINITIONS) USE_LIBDISPATCH_INIT_CONSTRUCTOR=1 DISPATCH_USE_PTHREAD_ATFORK=1 DISPATCH_USE_DIRECT_TSD=0
+WARNING_CFLAGS = -Wall -Wextra -Waggregate-return -Wfloat-equal -Wpacked -Wmissing-declarations -Wstrict-overflow=4 -Wstrict-aliasing=2
+OTHER_CFLAGS = -fno-unwind-tables -fno-asynchronous-unwind-tables -fno-exceptions -fdiagnostics-show-option -fverbose-asm -momit-leaf-frame-pointer
+OTHER_CFLAGS_debug = -fstack-protector -fno-inline -O0 -DDISPATCH_DEBUG=1
+OTHER_CFLAGS_profile = -DDISPATCH_PROFILE=1
+GENERATE_PROFILING_CODE = NO
+GENERATE_MASTER_OBJECT_FILE = NO
+DYLIB_CURRENT_VERSION = $(CURRENT_PROJECT_VERSION)
+UMBRELLA_LDFLAGS = -umbrella System
+UMBRELLA_LDFLAGS[sdk=iphonesimulator*] =
+OTHER_LDFLAGS = $(OTHER_LDFLAGS) $(UMBRELLA_LDFLAGS) $(CR_LDFLAGS)
diff --git a/xcodescripts/install-manpages.sh b/xcodescripts/install-manpages.sh
new file mode 100755
index 0000000..2d88a26
--- /dev/null
+++ b/xcodescripts/install-manpages.sh
@@ -0,0 +1,107 @@
+#!/bin/bash -e
+#
+# Copyright (c) 2010-2011 Apple Inc. All rights reserved.
+#
+# @APPLE_APACHE_LICENSE_HEADER_START@
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+# @APPLE_APACHE_LICENSE_HEADER_END@
+#
+
+if [ "$ACTION" = installhdrs ]; then exit 0; fi
+if [ "${RC_ProjectName%_Sim}" != "${RC_ProjectName}" ]; then exit 0; fi
+
+mkdir -p "$DSTROOT"/usr/share/man/man3 || true
+mkdir -p "$DSTROOT"/usr/local/share/man/man3 || true
+
+# Copy man pages
+cd "$SRCROOT"/man
+BASE_PAGES="dispatch.3 dispatch_after.3 dispatch_api.3 dispatch_apply.3 \
+ dispatch_async.3 dispatch_group_create.3 dispatch_object.3 \
+ dispatch_once.3 dispatch_queue_create.3 dispatch_semaphore_create.3 \
+ dispatch_source_create.3 dispatch_time.3 dispatch_data_create.3 \
+ dispatch_io_create.3 dispatch_io_read.3 dispatch_read.3"
+
+PRIVATE_PAGES="dispatch_benchmark.3"
+
+cp ${BASE_PAGES} "$DSTROOT"/usr/share/man/man3
+cp ${PRIVATE_PAGES} "$DSTROOT"/usr/local/share/man/man3
+
+# Make hard links (lots of hard links)
+
+cd "$DSTROOT"/usr/local/share/man/man3
+ln -f dispatch_benchmark.3 dispatch_benchmark_f.3
+chown ${INSTALL_OWNER}:${INSTALL_GROUP} $PRIVATE_PAGES
+chmod $INSTALL_MODE_FLAG $PRIVATE_PAGES
+
+cd $DSTROOT/usr/share/man/man3
+
+chown ${INSTALL_OWNER}:${INSTALL_GROUP} $BASE_PAGES
+chmod $INSTALL_MODE_FLAG $BASE_PAGES
+
+ln -f dispatch_after.3 dispatch_after_f.3
+ln -f dispatch_apply.3 dispatch_apply_f.3
+ln -f dispatch_once.3 dispatch_once_f.3
+
+for m in dispatch_async_f dispatch_sync dispatch_sync_f; do
+ ln -f dispatch_async.3 ${m}.3
+done
+
+for m in dispatch_group_enter dispatch_group_leave dispatch_group_wait \
+ dispatch_group_async dispatch_group_async_f dispatch_group_notify \
+ dispatch_group_notify_f; do
+ ln -f dispatch_group_create.3 ${m}.3
+done
+
+for m in dispatch_retain dispatch_release dispatch_suspend dispatch_resume \
+ dispatch_get_context dispatch_set_context dispatch_set_finalizer_f; do
+ ln -f dispatch_object.3 ${m}.3
+done
+
+for m in dispatch_semaphore_signal dispatch_semaphore_wait; do
+ ln -f dispatch_semaphore_create.3 ${m}.3
+done
+
+for m in dispatch_get_current_queue dispatch_main dispatch_get_main_queue \
+ dispatch_get_global_queue dispatch_queue_get_label \
+ dispatch_set_target_queue; do
+ ln -f dispatch_queue_create.3 ${m}.3
+done
+
+for m in dispatch_source_set_event_handler dispatch_source_set_event_handler_f \
+ dispatch_source_set_cancel_handler dispatch_source_set_cancel_handler_f \
+ dispatch_source_cancel dispatch_source_testcancel \
+ dispatch_source_get_handle dispatch_source_get_mask \
+ dispatch_source_get_data dispatch_source_merge_data \
+ dispatch_source_set_timer; do
+ ln -f dispatch_source_create.3 ${m}.3
+done
+
+ln -f dispatch_time.3 dispatch_walltime.3
+
+for m in dispatch_data_create_concat dispatch_data_create_subrange \
+ dispatch_data_create_map dispatch_data_apply \
+ dispatch_data_copy_region dispatch_data_get_size; do
+ ln -f dispatch_data_create.3 ${m}.3
+done
+
+for m in dispatch_io_create_with_path dispatch_io_set_high_water \
+ dispatch_io_set_low_water dispatch_io_set_interval \
+ dispatch_io_close; do
+ ln -f dispatch_io_create.3 ${m}.3
+done
+
+ln -f dispatch_io_read.3 dispatch_io_write.3
+
+ln -f dispatch_read.3 dispatch_write.3
diff --git a/xcodescripts/mig-headers.sh b/xcodescripts/mig-headers.sh
new file mode 100755
index 0000000..3669ec2
--- /dev/null
+++ b/xcodescripts/mig-headers.sh
@@ -0,0 +1,29 @@
+#!/bin/bash -e
+#
+# Copyright (c) 2010-2011 Apple Inc. All rights reserved.
+#
+# @APPLE_APACHE_LICENSE_HEADER_START@
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+# @APPLE_APACHE_LICENSE_HEADER_END@
+#
+
+export MIGCC="$(xcrun -find cc)"
+export MIGCOM="$(xcrun -find migcom)"
+export PATH="${PLATFORM_DEVELOPER_BIN_DIR}:${DEVELOPER_BIN_DIR}:${PATH}"
+for a in ${ARCHS}; do
+ xcrun mig -arch $a -header "${SCRIPT_OUTPUT_FILE_0}" \
+ -sheader "${SCRIPT_OUTPUT_FILE_1}" -user /dev/null \
+ -server /dev/null "${SCRIPT_INPUT_FILE_0}"
+done
diff --git a/xcodescripts/postprocess-headers.sh b/xcodescripts/postprocess-headers.sh
new file mode 100755
index 0000000..41f4669
--- /dev/null
+++ b/xcodescripts/postprocess-headers.sh
@@ -0,0 +1,21 @@
+#!/bin/bash -e
+#
+# Copyright (c) 2010-2011 Apple Inc. All rights reserved.
+#
+# @APPLE_APACHE_LICENSE_HEADER_START@
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+# @APPLE_APACHE_LICENSE_HEADER_END@
+#
+
diff --git a/xcodescripts/symlink-headers.sh b/xcodescripts/symlink-headers.sh
new file mode 100755
index 0000000..a062a6f
--- /dev/null
+++ b/xcodescripts/symlink-headers.sh
@@ -0,0 +1,29 @@
+#!/bin/bash -e
+#
+# Copyright (c) 2010-2011 Apple Inc. All rights reserved.
+#
+# @APPLE_APACHE_LICENSE_HEADER_START@
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+# @APPLE_APACHE_LICENSE_HEADER_END@
+#
+
+if [ "$DEPLOYMENT_LOCATION" != YES ]; then
+ DSTROOT="$CONFIGURATION_BUILD_DIR"
+ [ -L "$DSTROOT$PRIVATE_HEADERS_FOLDER_PATH"/private.h ] && exit
+fi
+
+mv "$DSTROOT$PRIVATE_HEADERS_FOLDER_PATH"/private.h \
+ "$DSTROOT$PRIVATE_HEADERS_FOLDER_PATH"/dispatch.h
+ln -sf dispatch.h "$DSTROOT$PRIVATE_HEADERS_FOLDER_PATH"/private.h