author     Linus Torvalds <torvalds@linux-foundation.org>   2015-04-14 14:37:47 -0700
committer  Linus Torvalds <torvalds@linux-foundation.org>   2015-04-14 14:37:47 -0700
commit     6c8a53c9e6a151fffb07f8b4c34bd1e33dddd467 (patch)
tree       791caf826ef136c521a97b7878f226b6ba1c1d75 /tools/perf/util/machine.c
parent     e95e7f627062be5e6ce971ce873e6234c91ffc50 (diff)
parent     066450be419fa48007a9f29e19828f2a86198754 (diff)
download   linux-6c8a53c9e6a151fffb07f8b4c34bd1e33dddd467.tar.gz
Merge branch 'perf-core-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip
Pull perf changes from Ingo Molnar:
"Core kernel changes:
- One of the more interesting features in this cycle is the ability
to attach eBPF programs (user-defined, sandboxed bytecode executed
by the kernel) to kprobes.
This allows user-defined instrumentation on a live kernel image
that cannot crash, hang or otherwise interfere with the kernel negatively.
(Right now it's limited to root-only, but in the future we might
allow unprivileged use as well.)
(Alexei Starovoitov)
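As a rough illustration of the new interface (the probed symbol, the
tracefs mount point and the hand-assembled instructions below are
assumptions made for the sketch, not something spelled out in this
pull): a program of type BPF_PROG_TYPE_KPROBE is loaded with the
bpf() syscall and attached to a kprobe perf event via the new
PERF_EVENT_IOC_SET_BPF ioctl.

/* Minimal sketch: attach a do-nothing eBPF program to a kprobe.
 * Assumes a >= 4.1 kernel with matching UAPI headers, root, tracefs
 * mounted at /sys/kernel/debug/tracing, and a do_sys_open symbol in
 * this kernel - all illustrative assumptions. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <sys/syscall.h>
#include <linux/perf_event.h>
#include <linux/bpf.h>
#include <linux/version.h>

int main(void)
{
	/* 1) create a kprobe through tracefs and read its tracepoint id */
	system("echo 'p:myprobe do_sys_open' > /sys/kernel/debug/tracing/kprobe_events");
	FILE *f = fopen("/sys/kernel/debug/tracing/events/kprobes/myprobe/id", "r");
	int id = -1;
	if (!f || fscanf(f, "%d", &id) != 1)
		return 1;
	fclose(f);

	/* 2) load a trivial eBPF program: r0 = 0; exit */
	struct bpf_insn insns[] = {
		{ .code = 0xb7 },	/* BPF_ALU64 | BPF_MOV | BPF_K: r0 = 0 */
		{ .code = 0x95 },	/* BPF_JMP   | BPF_EXIT                 */
	};
	union bpf_attr battr;
	memset(&battr, 0, sizeof(battr));
	battr.prog_type    = BPF_PROG_TYPE_KPROBE;
	battr.insns        = (unsigned long)insns;
	battr.insn_cnt     = 2;
	battr.license      = (unsigned long)"GPL";
	battr.kern_version = LINUX_VERSION_CODE;  /* required for kprobe programs */
	int prog_fd = syscall(__NR_bpf, BPF_PROG_LOAD, &battr, sizeof(battr));
	if (prog_fd < 0)
		return 1;

	/* 3) open the kprobe as a perf event and attach the program to it */
	struct perf_event_attr pattr;
	memset(&pattr, 0, sizeof(pattr));
	pattr.type   = PERF_TYPE_TRACEPOINT;
	pattr.size   = sizeof(pattr);
	pattr.config = id;
	int evt_fd = syscall(__NR_perf_event_open, &pattr, -1 /* all tasks */,
			     0 /* cpu 0 */, -1 /* no group */, 0);
	if (evt_fd < 0)
		return 1;
	ioctl(evt_fd, PERF_EVENT_IOC_SET_BPF, prog_fd);  /* the new attach point */
	ioctl(evt_fd, PERF_EVENT_IOC_ENABLE, 0);
	sleep(1);  /* the program now runs on every hit of the probe */
	return 0;
}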
- Another non-trivial feature is per event clockid support: this
allows, amongst other things, the selection of different clock
sources for event timestamps traced via perf.
This feature is sought by people who'd like to merge perf-generated
events with external events that were measured with different
clocks:
- cluster-wide profiling
- system-wide tracing combined with user-space events
- JIT profiling events
etc. Matching perf tooling support is added as well, available via
the -k, --clockid <clockid> parameter to perf record et al.
(Peter Zijlstra)
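From user space, a minimal sketch of what the new attribute bits look
like (the event choice and sampling parameters are arbitrary; the
use_clockid/clockid fields need 4.1+ kernel and headers):

/* Minimal sketch: ask for CLOCK_MONOTONIC_RAW timestamps on one event,
 * so samples can be lined up with clock_gettime() readings elsewhere. */
#define _GNU_SOURCE
#include <string.h>
#include <time.h>
#include <unistd.h>
#include <sys/syscall.h>
#include <linux/perf_event.h>

int main(void)
{
	struct perf_event_attr attr;

	memset(&attr, 0, sizeof(attr));
	attr.size          = sizeof(attr);
	attr.type          = PERF_TYPE_HARDWARE;
	attr.config        = PERF_COUNT_HW_CPU_CYCLES;
	attr.sample_period = 100000;
	attr.sample_type   = PERF_SAMPLE_IP | PERF_SAMPLE_TIME;
	attr.use_clockid   = 1;                    /* new in this cycle */
	attr.clockid       = CLOCK_MONOTONIC_RAW;  /* instead of the default perf clock */

	int fd = syscall(__NR_perf_event_open, &attr, 0 /* self */, -1 /* any cpu */,
			 -1 /* no group */, 0);
	return fd < 0 ? 1 : 0;
}

In the tooling, the -k/--clockid option is what sets these same two
fields on the events it opens.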
Hardware enablement kernel changes:
- x86 Intel Processor Trace (PT) support: a hardware tracer
on steroids, available on Broadwell CPUs.
The hardware trace stream is directly output into the user-space
ring-buffer, using the 'AUX' data format extension that was added
to the perf core to support hardware constraints such as the
necessity to have the tracing buffer physically contiguous.
This patch-set was developed for two years and this is the result.
A simple way to make use of this is BTS tracing: the PT driver
emulates BTS output, available via the 'intel_bts' PMU.
More explicit PT-specific tooling support is in the works as well
and will probably be ready by 4.2.
(Alexander Shishkin, Peter Zijlstra)
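Until the PT-specific tooling arrives, the BTS emulation can already
be driven through the generic dynamic-PMU event syntax, assuming an
'intel_bts' entry shows up under /sys/bus/event_source/devices/ on
the machine; something like:
perf record -e intel_bts// -- ls
perf script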
- x86 Intel Cache QoS Monitoring (CQM) support: this is a hardware
feature of Intel Xeon CPUs that allows the measurement and
allocation/partitioning of caches to individual workloads.
These kernel changes add the measurement side as a new PMU
driver, which exposes various QoS-related PMU events. (The
partitioning change is work in progress and is planned to be merged
as a cgroup extension.)
(Matt Fleming, Peter Zijlstra; CPU feature detection by Peter P
Waskiewicz Jr)
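The monitoring side can be tried with the usual dynamic-PMU syntax
once an 'intel_cqm' PMU is present (a CQM-capable Xeon is required;
the exact event names are listed under
/sys/bus/event_source/devices/intel_cqm/events/), for example:
perf stat -a -e intel_cqm/llc_occupancy/ -- sleep 5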
- x86 Intel Haswell LBR call stack support: this is a new Haswell
feature that allows the hardware recording of call chains, plus
tooling support. To activate this feature you have to enable it
via the new 'lbr' call-graph recording option:
perf record --call-graph lbr
perf report
or:
perf top --call-graph lbr
This hardware feature is a lot faster than stack walking or
DWARF-based unwinding, but has some limitations:
- It reuses the current LBR facility, so LBR call stack and
branch record can not be enabled at the same time.
- It is only available for user-space callchains.
(Yan, Zheng)
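Under the hood this maps to the new PERF_SAMPLE_BRANCH_CALL_STACK
branch-sampling flag; a rough, hand-rolled sketch of the kind of
request the tool makes (event choice and period are arbitrary here,
and this is an approximation rather than perf's exact setup):

/* Minimal sketch: sample cycles with LBR-based user call stacks
 * (Haswell or later; needs 4.1+ headers for the new branch flag). */
#include <string.h>
#include <unistd.h>
#include <sys/syscall.h>
#include <linux/perf_event.h>

int main(void)
{
	struct perf_event_attr attr;

	memset(&attr, 0, sizeof(attr));
	attr.size          = sizeof(attr);
	attr.type          = PERF_TYPE_HARDWARE;
	attr.config        = PERF_COUNT_HW_CPU_CYCLES;
	attr.sample_period = 4000;
	attr.sample_type   = PERF_SAMPLE_IP | PERF_SAMPLE_CALLCHAIN |
			     PERF_SAMPLE_BRANCH_STACK;
	/* user space only, and exclusive with plain branch recording (see above) */
	attr.branch_sample_type = PERF_SAMPLE_BRANCH_USER |
				  PERF_SAMPLE_BRANCH_CALL_STACK;

	int fd = syscall(__NR_perf_event_open, &attr, 0, -1, -1, 0);
	return fd < 0 ? 1 : 0;
}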
- x86 Intel Broadwell CPU support and various event constraints and
event table fixes for earlier models.
(Andi Kleen)
- x86 Intel HT CPU event scheduling workarounds. This is a complex
CPU bug affecting the SNB, IVB and HSW families that results in
counter value corruption. The mitigation code is enabled
automatically and is transparent.
(Maria Dimakopoulou, Stephane Eranian)
The perf tooling side had a ton of changes in this cycle as well, so
I'm only able to list the user visible changes here, in addition to
the tooling changes outlined above:
User visible changes affecting all tools:
- Improve support of compressed kernel modules (Jiri Olsa)
- Save DSO loading errno to better report errors (Arnaldo Carvalho de Melo)
- Bash completion for subcommands (Yunlong Song)
- Add 'I' event modifier for perf_event_attr.exclude_idle bit (Jiri Olsa)
- Add the missing -f option to override perf.data file ownership. (Yunlong Song)
- Show the first event with an invalid filter (David Ahern, Arnaldo Carvalho de Melo)
User visible changes in individual tools:
'perf data':
New tool for converting perf.data to other formats, initially
for the CTF (Common Trace Format) from LTTng (Jiri Olsa,
Sebastian Siewior)
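Typical usage, assuming perf was built with libbabeltrace support
(the output directory name here is arbitrary), is something like:
perf record -- sleep 1
perf data convert --to-ctf ./ctf-data
babeltrace ./ctf-data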
'perf diff':
Add --kallsyms option (David Ahern)
'perf list':
Allow listing events with 'tracepoint' prefix (Yunlong Song)
Sort the output of the command (Yunlong Song)
'perf kmem':
Respect -i option (Jiri Olsa)
Print big numbers using thousands' grouping (Namhyung Kim)
Allow -v option (Namhyung Kim)
Fix alignment of slab result table (Namhyung Kim)
'perf probe':
Support multiple probes on different binaries on the same command line (Masami Hiramatsu)
Support data collection from unnamed union/structure members. (Masami Hiramatsu)
Check kprobes blacklist when adding new events. (Masami Hiramatsu)
'perf record':
Teach 'perf record' about perf_event_attr.clockid (Peter Zijlstra)
Support recording running/enabled time (Andi Kleen)
'perf sched':
Improve the performance of 'perf sched replay' on high CPU core count machines (Yunlong Song)
'perf report' and 'perf top':
Allow annotating entries in callchains in the hists browser (Arnaldo Carvalho de Melo)
Indicate which callchain entries are annotated in the
TUI hists browser (Arnaldo Carvalho de Melo)
Add pid/tid filtering to 'report' and 'script' commands (David Ahern)
Consider PERF_RECORD_ events with cpumode == 0 in 'perf top', removing one
cause of long term memory usage buildup, i.e. not processing PERF_RECORD_EXIT
events (Arnaldo Carvalho de Melo)
'perf stat':
Report unsupported events properly (Suzuki K. Poulose)
Output running time and run/enabled ratio in CSV mode (Andi Kleen)
'perf trace':
Handle legacy syscalls tracepoints (David Ahern, Arnaldo Carvalho de Melo)
Only insert blank duration bracket when tracing syscalls (Arnaldo Carvalho de Melo)
Filter out the trace pid when no threads are specified (Arnaldo Carvalho de Melo)
Dump stack on segfaults (Arnaldo Carvalho de Melo)
No need to explicitly enable evsels for a workload started from perf, let it
be enabled via perf_event_attr.enable_on_exec, removing some events that take
place in the 'perf trace' before a workload is really started by it.
(Arnaldo Carvalho de Melo)
Allow mixing with tracepoints and suppressing plain syscalls. (Arnaldo Carvalho de Melo)
There's also been a ton of infrastructure work done, such as the
split-out of perf's build system into tools/build/ and other changes -
see the shortlog and changelog for details"
* 'perf-core-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: (358 commits)
perf/x86/intel/pt: Clean up the control flow in pt_pmu_hw_init()
perf evlist: Fix type for references to data_head/tail
perf probe: Check the orphaned -x option
perf probe: Support multiple probes on different binaries
perf buildid-list: Fix segfault when show DSOs with hits
perf tools: Fix cross-endian analysis
perf tools: Fix error path to do closedir() when synthesizing threads
perf tools: Fix synthesizing fork_event.ppid for non-main thread
perf tools: Add 'I' event modifier for exclude_idle bit
perf report: Don't call map__kmap if map is NULL.
perf tests: Fix attr tests
perf probe: Fix ARM 32 building error
perf tools: Merge all perf_event_attr print functions
perf record: Add clockid parameter
perf sched replay: Use replay_repeat to calculate the runavg of cpu usage instead of the default value 10
perf sched replay: Support using -f to override perf.data file ownership
perf sched replay: Fix the EMFILE error caused by the limitation of the maximum open files
perf sched replay: Handle the dead halt of sem_wait when create_tasks() fails for any task
perf sched replay: Fix the segmentation fault problem caused by pr_err in threads
perf sched replay: Realloc the memory of pid_to_task stepwise to adapt to the different pid_max configurations
...
Diffstat (limited to 'tools/perf/util/machine.c')
-rw-r--r-- | tools/perf/util/machine.c | 332
1 file changed, 217 insertions(+), 115 deletions(-)
diff --git a/tools/perf/util/machine.c b/tools/perf/util/machine.c index 1bca3a9f2b16..527e032e24f6 100644 --- a/tools/perf/util/machine.c +++ b/tools/perf/util/machine.c @@ -89,16 +89,6 @@ static void dsos__delete(struct dsos *dsos) } } -void machine__delete_dead_threads(struct machine *machine) -{ - struct thread *n, *t; - - list_for_each_entry_safe(t, n, &machine->dead_threads, node) { - list_del(&t->node); - thread__delete(t); - } -} - void machine__delete_threads(struct machine *machine) { struct rb_node *nd = rb_first(&machine->threads); @@ -106,9 +96,8 @@ void machine__delete_threads(struct machine *machine) while (nd) { struct thread *t = rb_entry(nd, struct thread, rb_node); - rb_erase(&t->rb_node, &machine->threads); nd = rb_next(nd); - thread__delete(t); + machine__remove_thread(machine, t); } } @@ -361,9 +350,13 @@ static struct thread *__machine__findnew_thread(struct machine *machine, * the full rbtree: */ th = machine->last_match; - if (th && th->tid == tid) { - machine__update_thread_pid(machine, th, pid); - return th; + if (th != NULL) { + if (th->tid == tid) { + machine__update_thread_pid(machine, th, pid); + return th; + } + + thread__zput(machine->last_match); } while (*p != NULL) { @@ -371,7 +364,7 @@ static struct thread *__machine__findnew_thread(struct machine *machine, th = rb_entry(parent, struct thread, rb_node); if (th->tid == tid) { - machine->last_match = th; + machine->last_match = thread__get(th); machine__update_thread_pid(machine, th, pid); return th; } @@ -403,8 +396,11 @@ static struct thread *__machine__findnew_thread(struct machine *machine, thread__delete(th); return NULL; } - - machine->last_match = th; + /* + * It is now in the rbtree, get a ref + */ + thread__get(th); + machine->last_match = thread__get(th); } return th; @@ -462,30 +458,61 @@ int machine__process_lost_event(struct machine *machine __maybe_unused, return 0; } +static struct dso* +machine__module_dso(struct machine *machine, struct kmod_path *m, + const char *filename) +{ + struct dso *dso; + + dso = dsos__find(&machine->kernel_dsos, m->name, true); + if (!dso) { + dso = dsos__addnew(&machine->kernel_dsos, m->name); + if (dso == NULL) + return NULL; + + if (machine__is_host(machine)) + dso->symtab_type = DSO_BINARY_TYPE__SYSTEM_PATH_KMODULE; + else + dso->symtab_type = DSO_BINARY_TYPE__GUEST_KMODULE; + + /* _KMODULE_COMP should be next to _KMODULE */ + if (m->kmod && m->comp) + dso->symtab_type++; + + dso__set_short_name(dso, strdup(m->name), true); + dso__set_long_name(dso, strdup(filename), true); + } + + return dso; +} + struct map *machine__new_module(struct machine *machine, u64 start, const char *filename) { - struct map *map; - struct dso *dso = __dsos__findnew(&machine->kernel_dsos, filename); - bool compressed; + struct map *map = NULL; + struct dso *dso; + struct kmod_path m; - if (dso == NULL) + if (kmod_path__parse_name(&m, filename)) return NULL; - map = map__new2(start, dso, MAP__FUNCTION); - if (map == NULL) - return NULL; + map = map_groups__find_by_name(&machine->kmaps, MAP__FUNCTION, + m.name); + if (map) + goto out; - if (machine__is_host(machine)) - dso->symtab_type = DSO_BINARY_TYPE__SYSTEM_PATH_KMODULE; - else - dso->symtab_type = DSO_BINARY_TYPE__GUEST_KMODULE; + dso = machine__module_dso(machine, &m, filename); + if (dso == NULL) + goto out; - /* _KMODULE_COMP should be next to _KMODULE */ - if (is_kernel_module(filename, &compressed) && compressed) - dso->symtab_type++; + map = map__new2(start, dso, MAP__FUNCTION); + if (map == NULL) + goto out; 
map_groups__insert(&machine->kmaps, map); + +out: + free(m.name); return map; } @@ -650,6 +677,9 @@ int __machine__create_kernel_maps(struct machine *machine, struct dso *kernel) machine->vmlinux_maps[type]->unmap_ip = identity__map_ip; kmap = map__kmap(machine->vmlinux_maps[type]); + if (!kmap) + return -1; + kmap->kmaps = &machine->kmaps; map_groups__insert(&machine->kmaps, machine->vmlinux_maps[type]); @@ -671,7 +701,7 @@ void machine__destroy_kernel_maps(struct machine *machine) kmap = map__kmap(machine->vmlinux_maps[type]); map_groups__remove(&machine->kmaps, machine->vmlinux_maps[type]); - if (kmap->ref_reloc_sym) { + if (kmap && kmap->ref_reloc_sym) { /* * ref_reloc_sym is shared among all maps, so free just * on one of them. @@ -827,6 +857,39 @@ static char *get_kernel_version(const char *root_dir) return strdup(name); } +static bool is_kmod_dso(struct dso *dso) +{ + return dso->symtab_type == DSO_BINARY_TYPE__SYSTEM_PATH_KMODULE || + dso->symtab_type == DSO_BINARY_TYPE__GUEST_KMODULE; +} + +static int map_groups__set_module_path(struct map_groups *mg, const char *path, + struct kmod_path *m) +{ + struct map *map; + char *long_name; + + map = map_groups__find_by_name(mg, MAP__FUNCTION, m->name); + if (map == NULL) + return 0; + + long_name = strdup(path); + if (long_name == NULL) + return -ENOMEM; + + dso__set_long_name(map->dso, long_name, true); + dso__kernel_module_get_build_id(map->dso, ""); + + /* + * Full name could reveal us kmod compression, so + * we need to update the symtab_type if needed. + */ + if (m->comp && is_kmod_dso(map->dso)) + map->dso->symtab_type++; + + return 0; +} + static int map_groups__set_modules_path_dir(struct map_groups *mg, const char *dir_name, int depth) { @@ -865,35 +928,19 @@ static int map_groups__set_modules_path_dir(struct map_groups *mg, if (ret < 0) goto out; } else { - char *dot = strrchr(dent->d_name, '.'), - dso_name[PATH_MAX]; - struct map *map; - char *long_name; + struct kmod_path m; - if (dot == NULL) - continue; - - /* On some system, modules are compressed like .ko.gz */ - if (is_supported_compression(dot + 1) && - is_kmodule_extension(dot - 2)) - dot -= 3; + ret = kmod_path__parse_name(&m, dent->d_name); + if (ret) + goto out; - snprintf(dso_name, sizeof(dso_name), "[%.*s]", - (int)(dot - dent->d_name), dent->d_name); + if (m.kmod) + ret = map_groups__set_module_path(mg, path, &m); - strxfrchar(dso_name, '-', '_'); - map = map_groups__find_by_name(mg, MAP__FUNCTION, - dso_name); - if (map == NULL) - continue; + free(m.name); - long_name = strdup(path); - if (long_name == NULL) { - ret = -1; + if (ret) goto out; - } - dso__set_long_name(map->dso, long_name, true); - dso__kernel_module_get_build_id(map->dso, ""); } } @@ -1046,40 +1093,11 @@ static int machine__process_kernel_mmap_event(struct machine *machine, strlen(kmmap_prefix) - 1) == 0; if (event->mmap.filename[0] == '/' || (!is_kernel_mmap && event->mmap.filename[0] == '[')) { - - char short_module_name[1024]; - char *name, *dot; - - if (event->mmap.filename[0] == '/') { - name = strrchr(event->mmap.filename, '/'); - if (name == NULL) - goto out_problem; - - ++name; /* skip / */ - dot = strrchr(name, '.'); - if (dot == NULL) - goto out_problem; - /* On some system, modules are compressed like .ko.gz */ - if (is_supported_compression(dot + 1)) - dot -= 3; - if (!is_kmodule_extension(dot + 1)) - goto out_problem; - snprintf(short_module_name, sizeof(short_module_name), - "[%.*s]", (int)(dot - name), name); - strxfrchar(short_module_name, '-', '_'); - } else - 
strcpy(short_module_name, event->mmap.filename); - map = machine__new_module(machine, event->mmap.start, event->mmap.filename); if (map == NULL) goto out_problem; - name = strdup(short_module_name); - if (name == NULL) - goto out_problem; - - dso__set_short_name(map->dso, name, true); map->end = map->start + event->mmap.len; } else if (is_kernel_mmap) { const char *symbol_name = (event->mmap.filename + @@ -1092,7 +1110,7 @@ static int machine__process_kernel_mmap_event(struct machine *machine, struct dso *dso; list_for_each_entry(dso, &machine->kernel_dsos.head, node) { - if (is_kernel_module(dso->long_name, NULL)) + if (is_kernel_module(dso->long_name)) continue; kernel = dso; @@ -1236,15 +1254,19 @@ out_problem: return 0; } -static void machine__remove_thread(struct machine *machine, struct thread *th) +void machine__remove_thread(struct machine *machine, struct thread *th) { - machine->last_match = NULL; + if (machine->last_match == th) + thread__zput(machine->last_match); + rb_erase(&th->rb_node, &machine->threads); /* - * We may have references to this thread, for instance in some hist_entry - * instances, so just move them to a separate list. + * Move it first to the dead_threads list, then drop the reference, + * if this is the last reference, then the thread__delete destructor + * will be called and we will remove it from the dead_threads list. */ list_add_tail(&th->node, &machine->dead_threads); + thread__put(th); } int machine__process_fork_event(struct machine *machine, union perf_event *event, @@ -1387,29 +1409,27 @@ struct mem_info *sample__resolve_mem(struct perf_sample *sample, static int add_callchain_ip(struct thread *thread, struct symbol **parent, struct addr_location *root_al, - bool branch_history, + u8 *cpumode, u64 ip) { struct addr_location al; al.filtered = 0; al.sym = NULL; - if (branch_history) + if (!cpumode) { thread__find_cpumode_addr_location(thread, MAP__FUNCTION, ip, &al); - else { - u8 cpumode = PERF_RECORD_MISC_USER; - + } else { if (ip >= PERF_CONTEXT_MAX) { switch (ip) { case PERF_CONTEXT_HV: - cpumode = PERF_RECORD_MISC_HYPERVISOR; + *cpumode = PERF_RECORD_MISC_HYPERVISOR; break; case PERF_CONTEXT_KERNEL: - cpumode = PERF_RECORD_MISC_KERNEL; + *cpumode = PERF_RECORD_MISC_KERNEL; break; case PERF_CONTEXT_USER: - cpumode = PERF_RECORD_MISC_USER; + *cpumode = PERF_RECORD_MISC_USER; break; default: pr_debug("invalid callchain context: " @@ -1423,8 +1443,8 @@ static int add_callchain_ip(struct thread *thread, } return 0; } - thread__find_addr_location(thread, cpumode, MAP__FUNCTION, - ip, &al); + thread__find_addr_location(thread, *cpumode, MAP__FUNCTION, + ip, &al); } if (al.sym != NULL) { @@ -1502,18 +1522,102 @@ static int remove_loops(struct branch_entry *l, int nr) return nr; } -static int thread__resolve_callchain_sample(struct thread *thread, - struct ip_callchain *chain, - struct branch_stack *branch, - struct symbol **parent, - struct addr_location *root_al, - int max_stack) +/* + * Recolve LBR callstack chain sample + * Return: + * 1 on success get LBR callchain information + * 0 no available LBR callchain information, should try fp + * negative error code on other errors. 
+ */ +static int resolve_lbr_callchain_sample(struct thread *thread, + struct perf_sample *sample, + struct symbol **parent, + struct addr_location *root_al, + int max_stack) { + struct ip_callchain *chain = sample->callchain; int chain_nr = min(max_stack, (int)chain->nr); + u8 cpumode = PERF_RECORD_MISC_USER; + int i, j, err; + u64 ip; + + for (i = 0; i < chain_nr; i++) { + if (chain->ips[i] == PERF_CONTEXT_USER) + break; + } + + /* LBR only affects the user callchain */ + if (i != chain_nr) { + struct branch_stack *lbr_stack = sample->branch_stack; + int lbr_nr = lbr_stack->nr; + /* + * LBR callstack can only get user call chain. + * The mix_chain_nr is kernel call chain + * number plus LBR user call chain number. + * i is kernel call chain number, + * 1 is PERF_CONTEXT_USER, + * lbr_nr + 1 is the user call chain number. + * For details, please refer to the comments + * in callchain__printf + */ + int mix_chain_nr = i + 1 + lbr_nr + 1; + + if (mix_chain_nr > PERF_MAX_STACK_DEPTH + PERF_MAX_BRANCH_DEPTH) { + pr_warning("corrupted callchain. skipping...\n"); + return 0; + } + + for (j = 0; j < mix_chain_nr; j++) { + if (callchain_param.order == ORDER_CALLEE) { + if (j < i + 1) + ip = chain->ips[j]; + else if (j > i + 1) + ip = lbr_stack->entries[j - i - 2].from; + else + ip = lbr_stack->entries[0].to; + } else { + if (j < lbr_nr) + ip = lbr_stack->entries[lbr_nr - j - 1].from; + else if (j > lbr_nr) + ip = chain->ips[i + 1 - (j - lbr_nr)]; + else + ip = lbr_stack->entries[0].to; + } + + err = add_callchain_ip(thread, parent, root_al, &cpumode, ip); + if (err) + return (err < 0) ? err : 0; + } + return 1; + } + + return 0; +} + +static int thread__resolve_callchain_sample(struct thread *thread, + struct perf_evsel *evsel, + struct perf_sample *sample, + struct symbol **parent, + struct addr_location *root_al, + int max_stack) +{ + struct branch_stack *branch = sample->branch_stack; + struct ip_callchain *chain = sample->callchain; + int chain_nr = min(max_stack, (int)chain->nr); + u8 cpumode = PERF_RECORD_MISC_USER; int i, j, err; int skip_idx = -1; int first_call = 0; + callchain_cursor_reset(&callchain_cursor); + + if (has_branch_callstack(evsel)) { + err = resolve_lbr_callchain_sample(thread, sample, parent, + root_al, max_stack); + if (err) + return (err < 0) ? err : 0; + } + /* * Based on DWARF debug information, some architectures skip * a callchain entry saved by the kernel. @@ -1521,8 +1625,6 @@ static int thread__resolve_callchain_sample(struct thread *thread, if (chain->nr < PERF_MAX_STACK_DEPTH) skip_idx = arch_skip_callchain_idx(thread, chain); - callchain_cursor_reset(&callchain_cursor); - /* * Add branches to call stack for easier browsing. This gives * more context for a sample than just the callers. @@ -1568,10 +1670,10 @@ static int thread__resolve_callchain_sample(struct thread *thread, for (i = 0; i < nr; i++) { err = add_callchain_ip(thread, parent, root_al, - true, be[i].to); + NULL, be[i].to); if (!err) err = add_callchain_ip(thread, parent, root_al, - true, be[i].from); + NULL, be[i].from); if (err == -EINVAL) break; if (err) @@ -1600,7 +1702,7 @@ check_calls: #endif ip = chain->ips[j]; - err = add_callchain_ip(thread, parent, root_al, false, ip); + err = add_callchain_ip(thread, parent, root_al, &cpumode, ip); if (err) return (err < 0) ? 
err : 0; @@ -1623,9 +1725,9 @@ int thread__resolve_callchain(struct thread *thread, struct addr_location *root_al, int max_stack) { - int ret = thread__resolve_callchain_sample(thread, sample->callchain, - sample->branch_stack, - parent, root_al, max_stack); + int ret = thread__resolve_callchain_sample(thread, evsel, + sample, parent, + root_al, max_stack); if (ret) return ret; |