GDB Internals

Scope of this Document

This document describes the internals of the GNU debugger, gdb. It includes descriptions of gdb's key algorithms and operations, as well as the mechanisms that adapt gdb to specific hosts and targets.

1 Requirements

Before diving into the internals, you should understand the formal requirements and other expectations for gdb. Although some of these may seem obvious, there have been proposals for gdb that have run counter to these requirements.

First of all, gdb is a debugger. It's not designed to be a front panel for embedded systems. It's not a text editor. It's not a shell. It's not a programming environment.

gdb is an interactive tool. Although a batch mode is available, gdb's primary role is to interact with a human programmer.

gdb should be responsive to the user. A programmer hot on the trail of a nasty bug, and operating under a looming deadline, is going to be very impatient with everything, including the response time to debugger commands.

gdb should be relatively permissive, such as for expressions. While the compiler should be picky (or have the option to be made picky), since source code usually lives a long time, the programmer doing debugging shouldn't have to spend time figuring out how to mollify the debugger.

gdb will be called upon to deal with really large programs. Executable sizes of 50 to 100 megabytes occur regularly, and we've heard reports of programs approaching 1 gigabyte in size.

gdb should be able to run everywhere. No other debugger is available for even half as many configurations as gdb supports.

2 Overall Structure

gdb consists of three major subsystems: user interface, symbol handling (the symbol side), and target system handling (the target side).

The user interface consists of several actual interfaces, plus supporting code.

The symbol side consists of object file readers, debugging info interpreters, symbol table management, source language expression parsing, and type and value printing.

The target side consists of execution control, stack frame analysis, and physical target manipulation.

The target side/symbol side division is not formal, and there are a number of exceptions. For instance, core file support involves symbolic elements (the basic core file reader is in BFD) and target elements (it supplies the contents of memory and the values of registers). Rather, this division is useful for understanding how the minor subsystems should fit together.

2.1 The Symbol Side

The symbolic side of gdb can be thought of as “everything you can do in gdb without having a live program running”. For instance, you can look at the types of variables, and evaluate many kinds of expressions.

2.2 The Target Side

The target side of gdb is the “bits and bytes manipulator”. Although it may make reference to symbolic info here and there, most of the target side will run with only a stripped executable available—or even no executable at all, in remote debugging cases.

Operations such as disassembly, stack frame crawls, and register display are able to work with no symbolic info at all. In some cases, such as disassembly, gdb will use symbolic info to present addresses relative to symbols rather than as raw numbers, but it will work either way.

2.3 Configurations

Host refers to attributes of the system where gdb runs. Target refers to the system where the program being debugged executes. In most cases they are the same machine; when they are, a third class of attributes, native attributes, comes into play.

Defines and include files needed to build on the host are host support. Examples are tty support, system defined types, host byte order, host float format.

Defines and information needed to handle the target format are target dependent. Examples are the stack frame format, instruction set, breakpoint instruction, registers, and how to set up and tear down the stack to call a function.

Information that is only needed when the host and target are the same, is native dependent. One example is Unix child process support; if the host and target are not the same, doing a fork to start the target process is a bad idea. The various macros needed for finding the registers in the upage, running ptrace, and such are all in the native-dependent files.

Another example of native-dependent code is support for features that are really part of the target environment, but which require #include files that are only available on the host system. Core file handling and setjmp handling are two common cases.

When you want to make gdb work “native” on a particular machine, you have to include all three kinds of information.

3 Algorithms

gdb uses a number of debugging-specific algorithms. They are often not very complicated, but get lost in the thicket of special cases and real-world issues. This chapter describes the basic algorithms and mentions some of the specific target definitions that they use.

3.1 Frames

A frame is a construct that gdb uses to keep track of calling and called functions.

FRAME_FP in the machine description has no meaning to the machine-independent part of gdb, except that it is used when setting up a new frame from scratch, as follows:

     create_new_frame (read_register (DEPRECATED_FP_REGNUM), read_pc ());

Other than that, all the meaning imparted to DEPRECATED_FP_REGNUM is imparted by the machine-dependent code. So, DEPRECATED_FP_REGNUM can have any value that is convenient for the code that creates new frames. (create_new_frame calls DEPRECATED_INIT_EXTRA_FRAME_INFO if it is defined; that is where you should use the DEPRECATED_FP_REGNUM value, if your frames are nonstandard.)

Given a gdb frame, define DEPRECATED_FRAME_CHAIN to determine the address of the calling function's frame. This will be used to create a new gdb frame struct, and then DEPRECATED_INIT_EXTRA_FRAME_INFO and DEPRECATED_INIT_FRAME_PC will be called for the new frame.

3.2 Breakpoint Handling

In general, a breakpoint is a user-designated location in the program where the user wants to regain control if program execution ever reaches that location.

There are two main ways to implement breakpoints; either as “hardware” breakpoints or as “software” breakpoints.

Hardware breakpoints are sometimes available as a built-in debugging feature of some chips. Typically these work by having a dedicated register into which the breakpoint address may be stored. If the PC (shorthand for program counter) ever matches a value in a breakpoint register, the CPU raises an exception and reports it to gdb.

Another possibility is when an emulator is in use; many emulators include circuitry that watches the address lines coming out of the processor, and forces it to stop if the address matches a breakpoint's address.

A third possibility is that the target already has the ability to do breakpoints somehow; for instance, a ROM monitor may do its own software breakpoints. So although these are not literally “hardware breakpoints”, from gdb's point of view they work the same; gdb need not do anything more than set the breakpoint and wait for something to happen.

Since they depend on hardware resources, hardware breakpoints may be limited in number; when the user asks for more, gdb will start trying to set software breakpoints. (On some architectures, notably the 32-bit x86 platforms, gdb cannot always know whether there are enough hardware resources to insert all the hardware breakpoints and watchpoints. On those platforms, gdb prints an error message only when the program being debugged is continued.)

Software breakpoints require gdb to do somewhat more work. The basic theory is that gdb will replace a program instruction with a trap, illegal divide, or some other instruction that will cause an exception, and then when it's encountered, gdb will take the exception and stop the program. When the user says to continue, gdb will restore the original instruction, single-step, re-insert the trap, and continue on.
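In outline, the bookkeeping amounts to saving the overwritten instruction and putting it back on demand. Here is a minimal sketch using a byte array as a stand-in for target memory; the structure and function names are invented for this illustration, and 0xCC is the x86 `int3' trap opcode (other architectures use other sequences):

```c
/* Simulated target memory is just a byte array; bp->addr indexes it.  */
struct sw_breakpoint
{
  unsigned long addr;     /* Where the breakpoint is planted.  */
  unsigned char shadow;   /* Saved copy of the overwritten byte.  */
};

#define TRAP_OPCODE 0xCC  /* x86 `int3'; target specific in reality.  */

static void
insert_breakpoint (unsigned char *memory, struct sw_breakpoint *bp)
{
  bp->shadow = memory[bp->addr];   /* Remember the original instruction.  */
  memory[bp->addr] = TRAP_OPCODE;  /* Replace it with the trap.  */
}

static void
remove_breakpoint (unsigned char *memory, struct sw_breakpoint *bp)
{
  memory[bp->addr] = bp->shadow;   /* Restore the original instruction.  */
}
```

The continue sequence described above is then just remove, single-step, insert, resume.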

Since it literally overwrites the program being tested, the program area must be writable, so this technique won't work on programs in ROM. It can also distort the behavior of programs that examine themselves, although such a situation would be highly unusual.

Also, the software breakpoint instruction should be the smallest size of instruction, so it doesn't overwrite an instruction that might be a jump target, and cause disaster when the program jumps into the middle of the breakpoint instruction. (Strictly speaking, the breakpoint must be no larger than the smallest interval between instructions that may be jump targets; perhaps there is an architecture where only even-numbered instructions may be jumped to.) Note that it's possible for an instruction set not to have any instructions usable for a software breakpoint, although in practice only the ARC has failed to define such an instruction.

The basic definition of the software breakpoint is the macro BREAKPOINT.

Basic breakpoint object handling is in breakpoint.c. However, much of the interesting breakpoint action is in infrun.c.

3.3 Single Stepping

3.4 Signal Handling

3.5 Thread Handling

3.6 Inferior Function Calls

3.7 Longjmp Support

gdb has support for figuring out that the target is doing a longjmp and for stopping at the target of the jump, if we are stepping. This is done with a few specialized internal breakpoints, which are visible in the output of the `maint info breakpoints' command.

To make this work, you need to define a macro called GET_LONGJMP_TARGET, which will examine the jmp_buf structure and extract the longjmp target address. Since jmp_buf is target specific, you will need to define it in the appropriate tm-target.h file. Look in tm-sun4os4.h and sparc-tdep.c for examples of how to do this.
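As an illustration only, a macro in this spirit might look like the following. The jmp_buf layout, the slot index, and the macro's arguments here are all invented for the example; a real definition is entirely target specific and typically reads the jmp_buf out of inferior memory:

```c
/* Hypothetical jmp_buf layout in which slot JB_PC holds the saved PC.
   Real layouts differ per target, which is why the definition belongs
   in the target's tm-target.h file.  */
typedef unsigned long fake_jmp_buf[4];

#define JB_PC 0  /* Invented index of the saved-PC slot.  */

/* Extract the longjmp target address from JB into *PC_PTR; evaluate
   to nonzero to indicate success.  */
#define GET_LONGJMP_TARGET(jb, pc_ptr) \
  (*(pc_ptr) = (jb)[JB_PC], 1)
```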

3.8 Watchpoints

Watchpoints are a special kind of breakpoint (see breakpoints) which break when data is accessed rather than when some instruction is executed. When you have data which changes without your knowing what code does that, watchpoints are the silver bullet to hunt down and kill such bugs.

Watchpoints can be either hardware-assisted or not; the latter type is known as “software watchpoints.” gdb always uses hardware-assisted watchpoints if they are available, and falls back on software watchpoints otherwise. Typical situations where gdb will use software watchpoints are:

Software watchpoints are very slow, since gdb needs to single-step the program being debugged and test the value of the watched expression(s) after each instruction. The rest of this section is mostly irrelevant for software watchpoints.
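The per-instruction check can be sketched as follows. The names are invented for this illustration, and gdb's real software watchpoints re-evaluate full expressions rather than a single int:

```c
/* After every single-step, re-evaluate the watched expression and
   compare against the value seen at the previous stop.  */
struct sw_watchpoint
{
  int (*evaluate) (void);  /* Re-evaluate the watched expression.  */
  int old_value;           /* Value seen at the previous stop.  */
};

/* Return nonzero if the watched value changed since the last check.  */
static int
watchpoint_triggered (struct sw_watchpoint *wp)
{
  int new_value = wp->evaluate ();

  if (new_value != wp->old_value)
    {
      wp->old_value = new_value;
      return 1;
    }
  return 0;
}
```

Running this after every single instruction is what makes software watchpoints so slow.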

When the inferior stops, gdb tries to establish, among other possible reasons, whether it stopped due to a watchpoint being hit. For a data-write watchpoint, it does so by evaluating, for each watchpoint, the expression whose value is being watched, and testing whether the watched value has changed. For data-read and data-access watchpoints, gdb needs the target to supply a primitive that returns the address of the data that was accessed or read (see the description of target_stopped_data_address below): if this primitive returns a valid address, gdb infers that a watchpoint triggered if it watches an expression whose evaluation uses that address.

gdb uses several macros and primitives to support hardware watchpoints:

TARGET_HAS_HARDWARE_WATCHPOINTS
If defined, the target supports hardware watchpoints.

TARGET_CAN_USE_HARDWARE_WATCHPOINT (type, count, other)
Return the number of hardware watchpoints of type type that can be set. The value is positive if count watchpoints of this type can be set, zero if setting watchpoints of this type is not supported, and negative if count is more than the maximum number of watchpoints of type type that can be set. other is non-zero if other types of watchpoints are currently enabled (there are architectures which cannot set watchpoints of different types at the same time).

TARGET_REGION_OK_FOR_HW_WATCHPOINT (addr, len)
Return non-zero if hardware watchpoints can be used to watch a region whose address is addr and whose length in bytes is len.

TARGET_REGION_SIZE_OK_FOR_HW_WATCHPOINT (size)
Return non-zero if hardware watchpoints can be used to watch a region whose size is size. gdb only uses this macro as a fall-back, in case TARGET_REGION_OK_FOR_HW_WATCHPOINT is not defined.

target_insert_watchpoint (addr, len, type)
target_remove_watchpoint (addr, len, type)
Insert or remove a hardware watchpoint starting at addr, for len bytes. type is the watchpoint type, one of the possible values of the enumerated data type target_hw_bp_type, defined by breakpoint.h as follows:
           enum target_hw_bp_type
             {
               hw_write   = 0, /* Common (write) HW watchpoint */
               hw_read    = 1, /* Read    HW watchpoint */
               hw_access  = 2, /* Access (read or write) HW watchpoint */
               hw_execute = 3  /* Execute HW breakpoint */
             };

These two macros should return 0 for success, non-zero for failure.

target_remove_hw_breakpoint (addr, shadow)
target_insert_hw_breakpoint (addr, shadow)
Insert or remove a hardware-assisted breakpoint at address addr. Returns zero for success, non-zero for failure. shadow is the real contents of the byte where the breakpoint has been inserted; it is generally not valid when hardware breakpoints are used, but since no other code touches these values, the implementations of the above two macros can use them for their internal purposes.

target_stopped_data_address (addr_p)
If the inferior has some watchpoint that triggered, place the address associated with the watchpoint at the location pointed to by addr_p and return non-zero. Otherwise, return zero. Note that this primitive is used by gdb only on targets that support data-read or data-access type watchpoints, so targets that have support only for data-write watchpoints need not implement it.

HAVE_STEPPABLE_WATCHPOINT
If defined to a non-zero value, it is not necessary to disable a watchpoint to step over it.

HAVE_NONSTEPPABLE_WATCHPOINT
If defined to a non-zero value, gdb should disable a watchpoint to step the inferior over it.

HAVE_CONTINUABLE_WATCHPOINT
If defined to a non-zero value, it is possible to continue the inferior after a watchpoint has been hit.

CANNOT_STEP_HW_WATCHPOINTS
If this is defined to a non-zero value, gdb will remove all watchpoints before stepping the inferior.

STOPPED_BY_WATCHPOINT (wait_status)
Return non-zero if stopped by a watchpoint. wait_status is of the type struct target_waitstatus, defined by target.h. Normally, this macro is defined to invoke the function pointed to by the to_stopped_by_watchpoint member of the structure (of the type target_ops, defined in target.h) that describes the target-specific operations; to_stopped_by_watchpoint ignores the wait_status argument.

gdb does not require the non-zero value returned by STOPPED_BY_WATCHPOINT to be 100% correct, so if a target cannot determine for sure whether the inferior stopped due to a watchpoint, it could return non-zero “just in case”.

3.8.1 x86 Watchpoints

The 32-bit Intel x86 (a.k.a. ia32) processors feature special debug registers designed to facilitate debugging. gdb provides a generic library of functions that x86-based ports can use to implement support for watchpoints and hardware-assisted breakpoints. This subsection documents the x86 watchpoint facilities in gdb.

To use the generic x86 watchpoint support, a port should do the following:

The x86 watchpoint support works by maintaining mirror images of the debug registers. Values are copied between the mirror images and the real debug registers via a set of macros which each target needs to provide:

I386_DR_LOW_SET_CONTROL (val)
Set the Debug Control (DR7) register to the value val.

I386_DR_LOW_SET_ADDR (idx, addr)
Put the address addr into the debug register number idx.

I386_DR_LOW_RESET_ADDR (idx)
Reset (i.e. zero out) the address stored in the debug register number idx.

I386_DR_LOW_GET_STATUS
Return the value of the Debug Status (DR6) register. This value is used immediately after it is returned by I386_DR_LOW_GET_STATUS, so as to support per-thread status register values.

For each one of the 4 debug registers (whose indices are from 0 to 3) that store addresses, a reference count is maintained by gdb, to allow sharing of debug registers by several watchpoints. This allows users to define several watchpoints that watch the same expression, but with different conditions and/or commands, without wasting debug registers which are in short supply. gdb maintains the reference counts internally; targets don't have to do anything to use this feature.
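The sharing scheme can be sketched like this. This is a simplified model that tracks only addresses; gdb's real bookkeeping also records the length, the access type, and the corresponding DR7 control bits:

```c
#define DR_NADDR 4  /* The four x86 address debug registers, DR0-DR3.  */

static unsigned long dr_mirror[DR_NADDR]; /* Mirrored addresses.  */
static int dr_refcount[DR_NADDR];         /* Sharing counts; 0 = vacant.  */

/* Allocate (or share) a debug register watching ADDR; return its
   index, or -1 if all four registers are busy with other addresses.  */
static int
dr_insert (unsigned long addr)
{
  int i;

  for (i = 0; i < DR_NADDR; i++)
    if (dr_refcount[i] > 0 && dr_mirror[i] == addr)
      {
        dr_refcount[i]++;        /* Share the existing register.  */
        return i;
      }
  for (i = 0; i < DR_NADDR; i++)
    if (dr_refcount[i] == 0)
      {
        dr_mirror[i] = addr;     /* Claim a vacant register.  */
        dr_refcount[i] = 1;
        return i;
      }
  return -1;
}

/* Drop one reference; the register becomes vacant at count zero.  */
static void
dr_remove (int i)
{
  if (--dr_refcount[i] == 0)
    dr_mirror[i] = 0;
}
```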

The x86 debug registers can each watch a region that is 1, 2, or 4 bytes long. The ia32 architecture requires that each watched region be appropriately aligned: 2-byte region on 2-byte boundary, 4-byte region on 4-byte boundary. However, the x86 watchpoint support in gdb can watch unaligned regions and regions larger than 4 bytes (up to 16 bytes) by allocating several debug registers to watch a single region. This allocation of several registers per watched region is also done automatically without target code intervention.
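The splitting of a region into aligned 1-, 2- and 4-byte pieces can be sketched as follows. The helper is hypothetical; gdb's actual allocation logic may differ in detail, but the counting principle is the same:

```c
/* Count the aligned 1-, 2- and 4-byte pieces needed to cover a
   watched region starting at ADDR and LEN bytes long.  */
static int
debug_regs_for_region (unsigned long addr, int len)
{
  int nregs = 0;

  while (len > 0)
    {
      int size;

      if (len >= 4 && (addr & 3) == 0)
        size = 4;                /* Aligned 4-byte piece.  */
      else if (len >= 2 && (addr & 1) == 0)
        size = 2;                /* Aligned 2-byte piece.  */
      else
        size = 1;                /* Fall back to a single byte.  */

      addr += size;
      len -= size;
      nregs++;
    }
  return nregs;
}
```

With four registers available, an aligned 16-byte region is exactly coverable, which matches the 16-byte limit mentioned above.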

The generic x86 watchpoint support provides the following API for the gdb's application code:

i386_region_ok_for_watchpoint (addr, len)
The macro TARGET_REGION_OK_FOR_HW_WATCHPOINT is set to call this function. It counts the number of debug registers required to watch a given region, and returns a non-zero value if that number is less than 4, the number of debug registers available to x86 processors.

i386_stopped_data_address (addr_p)
The target function target_stopped_data_address is set to call this function. This function examines the breakpoint condition bits in the DR6 Debug Status register, as returned by the I386_DR_LOW_GET_STATUS macro, and returns the address associated with the first bit that is set in DR6.

i386_stopped_by_watchpoint (void)
The macro STOPPED_BY_WATCHPOINT is set to call this function. The argument passed to STOPPED_BY_WATCHPOINT is ignored. This function examines the breakpoint condition bits in the DR6 Debug Status register, as returned by the I386_DR_LOW_GET_STATUS macro, and returns true if any bit is set. Otherwise, false is returned.

i386_insert_watchpoint (addr, len, type)
i386_remove_watchpoint (addr, len, type)
Insert or remove a watchpoint. The macros target_insert_watchpoint and target_remove_watchpoint are set to call these functions. i386_insert_watchpoint first looks for a debug register which is already set to watch the same region for the same access types; if found, it just increments the reference count of that debug register, thus implementing debug register sharing between watchpoints. If no such register is found, the function looks for a vacant debug register, sets its mirrored value to addr, sets the mirrored value of DR7 Debug Control register as appropriate for the len and type parameters, and then passes the new values of the debug register and DR7 to the inferior by calling I386_DR_LOW_SET_ADDR and I386_DR_LOW_SET_CONTROL. If more than one debug register is required to cover the given region, the above process is repeated for each debug register.

i386_remove_watchpoint does the opposite: it resets the address in the mirrored value of the debug register and its read/write and length bits in the mirrored value of DR7, then passes these new values to the inferior via I386_DR_LOW_RESET_ADDR and I386_DR_LOW_SET_CONTROL. If a register is shared by several watchpoints, each time i386_remove_watchpoint is called, it decrements the reference count, and only calls I386_DR_LOW_RESET_ADDR and I386_DR_LOW_SET_CONTROL when the count goes to zero.

i386_insert_hw_breakpoint (addr, shadow)
i386_remove_hw_breakpoint (addr, shadow)
These functions insert and remove hardware-assisted breakpoints. The macros target_insert_hw_breakpoint and target_remove_hw_breakpoint are set to call these functions. These functions work like i386_insert_watchpoint and i386_remove_watchpoint, respectively, except that they set up the debug registers to watch instruction execution, and each hardware-assisted breakpoint always requires exactly one debug register.

i386_stopped_by_hwbp (void)
This function returns non-zero if the inferior has some watchpoint or hardware breakpoint that triggered. It works like i386_stopped_data_address, except that it doesn't record the address whose watchpoint triggered.

i386_cleanup_dregs (void)
This function clears all the reference counts, addresses, and control bits in the mirror images of the debug registers. It doesn't affect the actual debug registers in the inferior process.


  1. x86 processors support setting watchpoints on I/O reads or writes. However, since no target supports this (as of March 2001), and since enum target_hw_bp_type doesn't even have an enumeration for I/O watchpoints, this feature is not yet available to gdb running on x86.
  2. x86 processors can enable watchpoints locally, for the current task only, or globally, for all the tasks. For each debug register, there's a bit in the DR7 Debug Control register that determines whether the associated address is watched locally or globally. The current implementation of x86 watchpoint support in gdb always sets watchpoints to be locally enabled, since global watchpoints might interfere with the underlying OS and are probably unavailable in many platforms.

3.9 Observing changes in gdb internals

In order to function properly, several modules need to be notified when some changes occur in the gdb internals. Traditionally, these modules have relied on several paradigms, the most common ones being hooks and gdb-events. Unfortunately, none of these paradigms was versatile enough to become the standard notification mechanism in gdb. The fact that they only supported one “client” was also a strong limitation.

A new paradigm, based on the Observer pattern of the Design Patterns book, has therefore been implemented. The goal was to provide a new interface overcoming the issues with the notification mechanisms previously available. This new interface needed to be strongly typed, easy to extend, and versatile enough to be used as the standard interface when adding new notifications.

See GDB Observers for a brief description of the observers currently implemented in GDB. The rationale for the current implementation is also briefly discussed.

4 User Interface

gdb has several user interfaces. Although the command-line interface is the most common and most familiar, there are others.

4.1 Command Interpreter

The command interpreter in gdb is fairly simple. It is designed to allow for the set of commands to be augmented dynamically, and also has a recursive subcommand capability, where the first argument to a command may itself direct a lookup on a different command list.

For instance, the `set' command just starts a lookup on the setlist command list, while `set thread' recurses to the set_thread_cmd_list.
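A toy model of this recursive lookup follows. The struct cmd layout and the table contents here are invented for the example; they are not gdb's actual data structures:

```c
#include <string.h>

/* Each command may carry a sub-list, so the first argument of a
   prefix command like `set' is looked up again in that sub-list.  */
struct cmd
{
  const char *name;
  const char *action;    /* What a leaf command "does".  */
  struct cmd *sublist;   /* Non-NULL for a prefix command.  */
};

static struct cmd set_list[] = {
  { "thread", "set-thread", NULL },
  { NULL, NULL, NULL }
};

static struct cmd top_list[] = {
  { "set", NULL, set_list },
  { NULL, NULL, NULL }
};

/* Look the first word up in LIST; if it names a prefix command,
   recurse into its sub-list with the remaining words.  */
static const char *
lookup_cmd (struct cmd *list, const char **words)
{
  struct cmd *c;

  for (c = list; c->name != NULL; c++)
    if (strcmp (c->name, words[0]) == 0)
      return c->sublist != NULL ? lookup_cmd (c->sublist, words + 1)
                                : c->action;
  return NULL;
}
```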

To add commands in general, use add_cmd. add_com adds to the main command list and should be used for commands on that list. The usual place to add commands is in the _initialize_xyz routines at the ends of most source files.

To add paired `set' and `show' commands, use add_setshow_cmd or add_setshow_cmd_full. The former is a slightly simpler interface which is useful when you don't need to further modify the new command structures, while the latter returns the new command structures for manipulation.

Before removing commands from the command set it is a good idea to deprecate them for some time. Use deprecate_cmd on commands or aliases to set the deprecated flag. deprecate_cmd takes a struct cmd_list_element as its first argument. You can use the return value from add_com or add_cmd to deprecate the command immediately after it is created.

The first time a command is used the user will be warned and offered a replacement (if one exists). Note that the replacement string passed to deprecate_cmd should be the full name of the command, i.e. the entire string the user should type at the command line.

4.2 UI-Independent Output—the ui_out Functions

The ui_out functions present an abstraction level for the gdb output code. They hide the specifics of different user interfaces supported by gdb, and thus free the programmer from the need to write several versions of the same code, one each for every UI, to produce output.

4.2.1 Overview and Terminology

In general, execution of each gdb command produces some sort of output, and can even generate an input request.

Output can be generated for the following purposes:

This section mainly concentrates on how to build result output, although some of it also applies to other kinds of output.

Generation of output that displays the results of an operation involves one or more of the following:

The ui_out routines take care of the first three aspects. Annotations are provided by separate annotation routines. Note that use of annotations for an interface between a GUI and gdb is deprecated.

Output can be in the form of a single item, which we call a field; a list consisting of identical fields; a tuple consisting of non-identical fields; or a table, which is a tuple consisting of a header and a body. In a BNF-like form:

<table> ==>
     <header> <body>
<header> ==>
     { <column> }
<column> ==>
     <width> <alignment> <title>
<body> ==>
     { <row> }

4.2.2 General Conventions

Most ui_out routines are of type void; the exceptions are ui_out_stream_new (which returns a pointer to the newly created object) and the make_cleanup routines.

The first parameter is always the ui_out vector object, a pointer to a struct ui_out.

The format parameter is like in the printf family of functions. When it is present, there must also be a variable list of arguments sufficient to satisfy the % specifiers in the supplied format.

When a character string argument is not used in a ui_out function call, a NULL pointer has to be supplied instead.

4.2.3 Table, Tuple and List Functions

This section introduces ui_out routines for building lists, tuples and tables. The routines to output the actual data items (fields) are presented in the next section.

To recap: A tuple is a sequence of fields, each field containing information about an object; a list is a sequence of fields where each field describes an identical object.

Use the table functions when your output consists of a list of rows (tuples) and the console output should include a heading. Use this even when you are listing just one object but you still want the header.

Tables can not be nested. Tuples and lists can be nested up to a maximum of five levels.

The overall structure of the table output code is something like this:
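Schematically, the nesting looks like this (an outline based on the functions described below; indentation shows nesting, and this is not literal code):

```
ui_out_table_begin
    ui_out_table_header        <- one call per column
    ...
    ui_out_table_body
    ui_out_tuple_begin         <- one tuple per row
        ui_out_field_*         <- one call per field of the row
        ...
    ui_out_tuple_end
    ...
ui_out_table_end
```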


Here is the description of table-, tuple- and list-related ui_out functions:

— Function: void ui_out_table_begin (struct ui_out *uiout, int nbrofcols, int nr_rows, const char *tblid)

The function ui_out_table_begin marks the beginning of the output of a table. It should always be called before any other ui_out function for a given table. nbrofcols is the number of columns in the table. nr_rows is the number of rows in the table. tblid is an optional string identifying the table. The string pointed to by tblid is copied by the implementation of ui_out_table_begin, so the application can free the string if it was malloced.

The companion function ui_out_table_end, described below, marks the end of the table's output.

— Function: void ui_out_table_header (struct ui_out *uiout, int width, enum ui_align alignment, const char *colhdr)

ui_out_table_header provides the header information for a single table column. You call this function several times, once for each column of the table, after ui_out_table_begin, but before ui_out_table_body.

The value of width gives the column width in characters. The value of alignment is one of left, center, and right, and it specifies how to align the header: left-justify, center, or right-justify it. colhdr points to a string that specifies the column header; the implementation copies that string, so column header strings in malloced storage can be freed after the call.

— Function: void ui_out_table_body (struct ui_out *uiout)

This function delimits the table header from the table body.

— Function: void ui_out_table_end (struct ui_out *uiout)

This function signals the end of a table's output. It should be called after the table body has been produced by the list and field output functions.

There should be exactly one call to ui_out_table_end for each call to ui_out_table_begin, otherwise the ui_out functions will signal an internal error.

The output of the tuples that represent the table rows must follow the call to ui_out_table_body and precede the call to ui_out_table_end. You build a tuple by calling ui_out_tuple_begin and ui_out_tuple_end, with suitable calls to functions which actually output fields between them.

— Function: void ui_out_tuple_begin (struct ui_out *uiout, const char *id)

This function marks the beginning of a tuple output. id points to an optional string that identifies the tuple; it is copied by the implementation, and so strings in malloced storage can be freed after the call.

— Function: void ui_out_tuple_end (struct ui_out *uiout)

This function signals an end of a tuple output. There should be exactly one call to ui_out_tuple_end for each call to ui_out_tuple_begin, otherwise an internal gdb error will be signaled.

— Function: struct cleanup *make_cleanup_ui_out_tuple_begin_end (struct ui_out *uiout, const char *id)

This function first opens the tuple and then establishes a cleanup (see Cleanups) to close the tuple. It provides a convenient and correct implementation of the non-portable code sequence:

          struct cleanup *old_cleanup;
          ui_out_tuple_begin (uiout, "...");
          old_cleanup = make_cleanup ((void(*)(void *)) ui_out_tuple_end,
                                      uiout);

— Function: void ui_out_list_begin (struct ui_out *uiout, const char *id)

This function marks the beginning of a list output. id points to an optional string that identifies the list; it is copied by the implementation, and so strings in malloced storage can be freed after the call.

— Function: void ui_out_list_end (struct ui_out *uiout)

This function signals an end of a list output. There should be exactly one call to ui_out_list_end for each call to ui_out_list_begin, otherwise an internal gdb error will be signaled.

— Function: struct cleanup *make_cleanup_ui_out_list_begin_end (struct ui_out *uiout, const char *id)

Similar to make_cleanup_ui_out_tuple_begin_end, this function opens a list and then establishes a cleanup (see Cleanups) that will close the list.

4.2.4 Item Output Functions

The functions described below produce output for the actual data items, or fields, which contain information about the object.

Choose the appropriate function according to your particular needs.

— Function: void ui_out_field_fmt (struct ui_out *uiout, char *fldname, char *format, ...)

This is the most general output function. It produces the representation of the data in the variable-length argument list according to formatting specifications in format, a printf-like format string. The optional argument fldname supplies the name of the field. The data items themselves are supplied as additional arguments after format.

This generic function should be used only when it is not possible to use one of the specialized versions (see below).

— Function: void ui_out_field_int (struct ui_out *uiout, const char *fldname, int value)

This function outputs a value of an int variable. It uses the "%d" output conversion specification. fldname specifies the name of the field.

— Function: void ui_out_field_fmt_int (struct ui_out *uiout, int width, enum ui_align alignment, const char *fldname, int value)

This function outputs a value of an int variable. It differs from ui_out_field_int in that the caller specifies the desired width and alignment of the output. fldname specifies the name of the field.

— Function: void ui_out_field_core_addr (struct ui_out *uiout, const char *fldname, CORE_ADDR address)

This function outputs an address.

— Function: void ui_out_field_string (struct ui_out *uiout, const char *fldname, const char *string)

This function outputs a string using the "%s" conversion specification.

Sometimes, there's a need to compose your output piece by piece using functions that operate on a stream, such as value_print or fprintf_symbol_filtered. These functions accept an argument of the type struct ui_file *, a pointer to a ui_file object used to store the data stream used for the output. When you use one of these functions, you need a way to pass their results stored in a ui_file object to the ui_out functions. To this end, you first create a ui_stream object by calling ui_out_stream_new, pass the stream member of that ui_stream object to value_print and similar functions, and finally call ui_out_field_stream to output the field you constructed. When the ui_stream object is no longer needed, you should destroy it and free its memory by calling ui_out_stream_delete.

— Function: struct ui_stream *ui_out_stream_new (struct ui_out *uiout)

This function creates a new ui_stream object which uses the same output methods as the ui_out object whose pointer is passed in uiout. It returns a pointer to the newly created ui_stream object.

— Function: void ui_out_stream_delete (struct ui_stream *streambuf)

This function destroys a ui_stream object specified by streambuf.

— Function: void ui_out_field_stream (struct ui_out *uiout, const char *fieldname, struct ui_stream *streambuf)

This function consumes all the data accumulated in streambuf->stream and outputs it like ui_out_field_string does. After a call to ui_out_field_stream, the accumulated data no longer exists, but the stream is still valid and may be used for producing more fields.

Important: If there is any chance that your code could bail out before completing output generation and reaching the point where ui_out_stream_delete is called, it is necessary to set up a cleanup, to avoid leaking memory and other resources. Here's skeleton code to do that:

      struct ui_stream *mybuf = ui_out_stream_new (uiout);
      struct cleanup *old = make_cleanup (ui_out_stream_delete, mybuf);
      ...do something...
      do_cleanups (old);

If the function already has the old cleanup chain set (for other kinds of cleanups), you just have to add your cleanup to it:

       mybuf = ui_out_stream_new (uiout);
       make_cleanup (ui_out_stream_delete, mybuf);

Note that with cleanups in place, you should not call ui_out_stream_delete directly, or you would attempt to free the same buffer twice.

4.2.5 Utility Output Functions

— Function: void ui_out_field_skip (struct ui_out *uiout, const char *fldname)

This function skips a field in a table. Use it if you have to leave an empty field without disrupting the table alignment. The argument fldname specifies a name for the (missing) field.

— Function: void ui_out_text (struct ui_out *uiout, const char *string)

This function outputs the text in string in a way that makes it easy to be read by humans. For example, the console implementation of this method filters the text through a built-in pager, to prevent it from scrolling off the visible portion of the screen.

Use this function for printing relatively long chunks of text around the actual field data: the text it produces is not aligned according to the table's format. Use ui_out_field_string to output a string field, and use ui_out_message, described below, to output short messages.

— Function: void ui_out_spaces (struct ui_out *uiout, int nspaces)

This function outputs nspaces spaces. It is handy to align the text produced by ui_out_text with the rest of the table or list.

— Function: void ui_out_message (struct ui_out *uiout, int verbosity, const char *format, ...)

This function produces a formatted message, provided that the current verbosity level is at least as large as given by verbosity. The current verbosity level is specified by the user with the `set verbositylevel' command.

— Function: void ui_out_wrap_hint (struct ui_out *uiout, char *indent)

This function gives the console output filter (a paging filter) a hint of where to break lines which are too long. Ignored for all other output consumers. indent, if non-NULL, is the string to be printed to indent the wrapped text on the next line; it must remain accessible until the next call to ui_out_wrap_hint, or until an explicit newline is produced by one of the other functions. If indent is NULL, the wrapped text will not be indented.

— Function: void ui_out_flush (struct ui_out *uiout)

This function flushes whatever output has been accumulated so far, if the UI buffers output.

4.2.6 Examples of Use of ui_out functions

This section gives some practical examples of using the ui_out functions to generalize the old console-oriented code in gdb. The examples all come from functions defined in the breakpoint.c file.

This example, from the breakpoint_1 function, shows how to produce a table.

The original code was:

      if (!found_a_breakpoint++)
        {
          annotate_breakpoints_headers ();
          annotate_field (0);
          printf_filtered ("Num ");
          annotate_field (1);
          printf_filtered ("Type           ");
          annotate_field (2);
          printf_filtered ("Disp ");
          annotate_field (3);
          printf_filtered ("Enb ");
          if (addressprint)
            {
              annotate_field (4);
              printf_filtered ("Address    ");
            }
          annotate_field (5);
          printf_filtered ("What\n");
          annotate_breakpoints_table ();
        }

Here's the new version:

       nr_printable_breakpoints = ...;
       if (addressprint)
         ui_out_table_begin (ui, 6, nr_printable_breakpoints, "BreakpointTable");
       else
         ui_out_table_begin (ui, 5, nr_printable_breakpoints, "BreakpointTable");
       if (nr_printable_breakpoints > 0)
         annotate_breakpoints_headers ();
       if (nr_printable_breakpoints > 0)
         annotate_field (0);
       ui_out_table_header (uiout, 3, ui_left, "number", "Num");		/* 1 */
       if (nr_printable_breakpoints > 0)
         annotate_field (1);
       ui_out_table_header (uiout, 14, ui_left, "type", "Type");		/* 2 */
       if (nr_printable_breakpoints > 0)
         annotate_field (2);
       ui_out_table_header (uiout, 4, ui_left, "disp", "Disp");		/* 3 */
       if (nr_printable_breakpoints > 0)
         annotate_field (3);
       ui_out_table_header (uiout, 3, ui_left, "enabled", "Enb");	/* 4 */
       if (addressprint)
         {
           if (nr_printable_breakpoints > 0)
             annotate_field (4);
           if (TARGET_ADDR_BIT <= 32)
             ui_out_table_header (uiout, 10, ui_left, "addr", "Address");/* 5 */
           else
             ui_out_table_header (uiout, 18, ui_left, "addr", "Address");/* 5 */
         }
       if (nr_printable_breakpoints > 0)
         annotate_field (5);
       ui_out_table_header (uiout, 40, ui_noalign, "what", "What");	/* 6 */
       ui_out_table_body (uiout);
       if (nr_printable_breakpoints > 0)
         annotate_breakpoints_table ();

This example, from the print_one_breakpoint function, shows how to produce the actual data for the table whose structure was defined in the above example. The original code was:

        annotate_record ();
        annotate_field (0);
        printf_filtered ("%-3d ", b->number);
        annotate_field (1);
        if ((int) b->type > (sizeof (bptypes) / sizeof (bptypes[0]))
            || ((int) b->type != bptypes[(int) b->type].type))
          internal_error ("bptypes table does not describe type #%d.",
                          (int) b->type);
        printf_filtered ("%-14s ", bptypes[(int) b->type].description);
        annotate_field (2);
        printf_filtered ("%-4s ", bpdisps[(int)b->disposition]);
        annotate_field (3);
        printf_filtered ("%-3c ", bpenables[(int)b->enable]);

This is the new version:

        annotate_record ();
        ui_out_tuple_begin (uiout, "bkpt");
        annotate_field (0);
        ui_out_field_int (uiout, "number", b->number);
        annotate_field (1);
        if (((int) b->type > (sizeof (bptypes) / sizeof (bptypes[0])))
            || ((int) b->type != bptypes[(int) b->type].type))
          internal_error ("bptypes table does not describe type #%d.",
                          (int) b->type);
        ui_out_field_string (uiout, "type", bptypes[(int)b->type].description);
        annotate_field (2);
        ui_out_field_string (uiout, "disp", bpdisps[(int)b->disposition]);
        annotate_field (3);
        ui_out_field_fmt (uiout, "enabled", "%c", bpenables[(int)b->enable]);

This example, also from print_one_breakpoint, shows how to produce a complicated output field using print_expression, which requires a stream to be passed. It also shows how to automate stream destruction with cleanups. The original code was:

         annotate_field (5);
         print_expression (b->exp, gdb_stdout);

The new version is:

       struct ui_stream *stb = ui_out_stream_new (uiout);
       struct cleanup *old_chain = make_cleanup_ui_out_stream_delete (stb);
       annotate_field (5);
       print_expression (b->exp, stb->stream);
       ui_out_field_stream (uiout, "what", stb);

This example, also from print_one_breakpoint, shows how to use ui_out_text and ui_out_field_string. The original code was:

       annotate_field (5);
       if (b->dll_pathname == NULL)
         printf_filtered ("<any library> ");
       else
         printf_filtered ("library \"%s\" ", b->dll_pathname);

It became:

       annotate_field (5);
       if (b->dll_pathname == NULL)
         {
           ui_out_field_string (uiout, "what", "<any library>");
           ui_out_spaces (uiout, 1);
         }
       else
         {
           ui_out_text (uiout, "library \"");
           ui_out_field_string (uiout, "what", b->dll_pathname);
           ui_out_text (uiout, "\" ");
         }

The following example from print_one_breakpoint shows how to use ui_out_field_int and ui_out_spaces. The original code was:

       annotate_field (5);
       if (b->forked_inferior_pid != 0)
         printf_filtered ("process %d ", b->forked_inferior_pid);

It became:

       annotate_field (5);
       if (b->forked_inferior_pid != 0)
         {
           ui_out_text (uiout, "process ");
           ui_out_field_int (uiout, "what", b->forked_inferior_pid);
           ui_out_spaces (uiout, 1);
         }

Here's an example of using ui_out_field_string. The original code was:

       annotate_field (5);
       if (b->exec_pathname != NULL)
         printf_filtered ("program \"%s\" ", b->exec_pathname);

It became:

       annotate_field (5);
       if (b->exec_pathname != NULL)
         {
           ui_out_text (uiout, "program \"");
           ui_out_field_string (uiout, "what", b->exec_pathname);
           ui_out_text (uiout, "\" ");
         }

Finally, here's an example of printing an address. The original code:

       annotate_field (4);
       printf_filtered ("%s ",
             hex_string_custom ((unsigned long) b->address, 8));

It became:

       annotate_field (4);
       ui_out_field_core_addr (uiout, "Address", b->address);

4.3 Console Printing

4.4 TUI


5 libgdb

5.1 libgdb 1.0

libgdb 1.0 was an abortive project from some years ago. The theory was to provide an API to gdb's functionality.

5.2 libgdb 2.0

libgdb 2.0 is an ongoing effort to update gdb so that it is better able to support graphical and other environments.

Since libgdb development is on-going, its architecture is still evolving. The following components have so far been identified:

The model that ties these components together is described below.

5.3 The libgdb Model

A client of libgdb interacts with the library in two ways.

Since libgdb could have multiple clients (e.g. a GUI supporting the existing gdb CLI), those clients must co-operate when controlling libgdb. In particular, a client must ensure that libgdb is idle (i.e. no other client is using libgdb) before responding to a gdb-event by making a query.

5.4 CLI support

At present gdb's CLI is very much entangled with the core of libgdb. Consequently, a client wishing to include the CLI in its interface needs to carefully co-ordinate its own and the CLI's requirements.

It is suggested that the client set libgdb up to be bi-modal (alternate between CLI and client query modes). The notes below sketch out the theory:

When the client receives input intended for the CLI, it simply passes it along. Since the cli-out builder is installed by default, all the CLI output in response to that command is routed (pronounced rooted) through to the client-controlled gdb_stdout et al. streams. At the same time, the client is kept abreast of internal changes by virtue of being a libgdb observer.

The only restriction on the client is that it must wait until libgdb becomes idle before initiating any queries (using the client's custom builder).

5.5 libgdb components

Observer - gdb-events.h

gdb-events provides the client with a very raw mechanism that can be used to implement an observer. At present it only allows for one observer and that observer must, internally, handle the need to delay the processing of any event notifications until after libgdb has finished the current command.

Builder - ui-out.h

ui-out provides the infrastructure necessary for a client to create a builder. That builder is then passed down to libgdb when doing any queries.

Event Loop - event-loop.h

event-loop, currently non-re-entrant, provides a simple event loop. A client would need to either plug itself into this loop or implement a new event loop that gdb would use.

The event-loop will eventually be made re-entrant. This is so that gdb can better handle the problem of some commands blocking instead of returning.

Library - gdb.h

libgdb is the most obvious component of this system. It provides the query interface. Each function is parameterized by a ui-out builder. The result of the query is constructed using that builder before the query function returns.


6 Symbol Handling

Symbols are a key part of gdb's operation. Symbols include variables, functions, and types.

6.1 Symbol Reading

gdb reads symbols from symbol files. The usual symbol file is the file containing the program which gdb is debugging. gdb can be directed to use a different file for symbols (with the `symbol-file' command), and it can also read more symbols via the `add-symbol-file' and `load' commands, or while reading symbols from shared libraries.

Symbol files are initially opened by code in symfile.c using the BFD library (see Support Libraries). BFD identifies the type of the file by examining its header. find_sym_fns then uses this identification to locate a set of symbol-reading functions.

Symbol-reading modules identify themselves to gdb by calling add_symtab_fns during their module initialization. The argument to add_symtab_fns is a struct sym_fns which contains the name (or name prefix) of the symbol format, the length of the prefix, and pointers to four functions. These functions are called at various times to process symbol files whose identification matches the specified prefix.

The functions supplied by each module are:

xyz_symfile_init(struct sym_fns *sf)
Called from symbol_file_add when we are about to read a new symbol file. This function should clean up any internal state (possibly resulting from half-read previous files, for example) and prepare to read a new symbol file. Note that the symbol file which we are reading might be a new “main” symbol file, or might be a secondary symbol file whose symbols are being added to the existing symbol table.

The argument to xyz_symfile_init is a newly allocated struct sym_fns whose bfd field contains the BFD for the new symbol file being read. Its private field has been zeroed, and can be modified as desired. Typically, a struct of private information will be malloc'd, and a pointer to it will be placed in the private field.

There is no result from xyz_symfile_init, but it can call error if it detects an unavoidable problem.

xyz_symfile_finish()
Called from symbol_file_add when discarding existing symbols. This function need only handle the symbol-reading module's internal state; the symbol table data structures visible to the rest of gdb will be discarded by symbol_file_add. It has no arguments and no result. It may be called after xyz_symfile_init, if a new symbol table is being read, or may be called alone if all symbols are simply being discarded.
xyz_symfile_read(struct sym_fns *sf, CORE_ADDR addr, int mainline)
Called from symbol_file_add to actually read the symbols from a symbol-file into a set of psymtabs or symtabs.

sf points to the struct sym_fns originally passed to xyz_symfile_init for possible initialization. addr is the offset between the file's specified start address and its true address in memory. mainline is 1 if this is the main symbol table being read, and 0 if a secondary symbol file (e.g. shared library or dynamically loaded file) is being read.

In addition, if a symbol-reading module creates psymtabs when xyz_symfile_read is called, these psymtabs will contain a pointer to a function xyz_psymtab_to_symtab, which can be called from any point in the gdb symbol-handling code.

xyz_psymtab_to_symtab (struct partial_symtab *pst)
Called from psymtab_to_symtab (or the PSYMTAB_TO_SYMTAB macro) if the psymtab has not already been read in and had its pst->symtab pointer set. The argument is the psymtab to be fleshed-out into a symtab. Upon return, pst->readin should have been set to 1, and pst->symtab should contain a pointer to the new corresponding symtab, or zero if there were no symbols in that part of the symbol file.

6.2 Partial Symbol Tables

gdb has three types of symbol tables:

Full symbol tables (symtabs). These contain the main information about symbols and addresses.

Partial symbol tables (psymtabs). These contain enough information to know when to read the corresponding part of the full symbol table.

Minimal symbol tables (msymtabs). These contain information gleaned from non-debugging symbols.

This section describes partial symbol tables.

A psymtab is constructed by doing a very quick pass over an executable file's debugging information. Small amounts of information are extracted—enough to identify which parts of the symbol table will need to be re-read and fully digested later, when the user needs the information. The speed of this pass causes gdb to start up very quickly. Later, as the detailed rereading occurs, it occurs in small pieces, at various times, and the delay therefrom is mostly invisible to the user.

The symbols that show up in a file's psymtab should be, roughly, those visible to the debugger's user when the program is not running code from that file. These include external symbols and types, static symbols and types, and enum values declared at file scope.

The psymtab also contains the range of instruction addresses that the full symbol table would represent.

The idea is that there are only two ways for the user (or much of the code in the debugger) to reference a symbol:

By its address (e.g., execution stops at some address which is inside a function in this file). The address will be noticed to be in the range of this psymtab, and the full symtab will be read in.

By its name (e.g., the user asks to print a variable, or set a breakpoint on a function). Global names and file-scope names will be found in the psymtab, which will cause the symtab to be pulled in.

The only reason that psymtabs exist is to cause a symtab to be read in at the right moment. Any symbol that can be elided from a psymtab, while still causing that to happen, should not appear in it. Since psymtabs don't have the idea of scope, you can't put local symbols in them anyway. Psymtabs don't have the idea of the type of a symbol, either, so types need not appear, unless they will be referenced by name.

It is a bug for gdb to behave one way when only a psymtab has been read, and another way if the corresponding symtab has been read in. Such bugs are typically caused by a psymtab that does not contain all the visible symbols, or which has the wrong instruction address ranges.

The psymtab for a particular section of a symbol file (objfile) could be thrown away after the symtab has been read in. The symtab should always be searched before the psymtab, so the psymtab will never be used (in a bug-free environment). Currently, psymtabs are allocated on an obstack, and all the psymbols themselves are allocated in a pair of large arrays on an obstack, so there is little to be gained by trying to free them unless you want to do a lot more work.

6.3 Types

Fundamental Types (e.g., FT_VOID, FT_BOOLEAN).

These are the fundamental types that gdb uses internally. Fundamental types from the various debugging formats (stabs, ELF, etc) are mapped into one of these. They are basically a union of all fundamental types that gdb knows about for all the languages that gdb knows about.


Type Codes (e.g., TYPE_CODE_PTR, TYPE_CODE_ARRAY).

Each time gdb builds an internal type, it marks it with one of these types. The type may be a fundamental type, such as TYPE_CODE_INT, or a derived type, such as TYPE_CODE_PTR which is a pointer to another type. Typically, several FT_* types map to one TYPE_CODE_* type, and are distinguished by other members of the type struct, such as whether the type is signed or unsigned, and how many bits it uses.

Builtin Types (e.g., builtin_type_void, builtin_type_char).

These are instances of type structs that roughly correspond to fundamental types and are created as global types for gdb to use for various ugly historical reasons. We eventually want to eliminate these. Note for example that builtin_type_int initialized in gdbtypes.c is basically the same as a TYPE_CODE_INT type that is initialized in c-lang.c for an FT_INTEGER fundamental type. The difference is that the builtin_type is not associated with any particular objfile, and only one instance exists, while c-lang.c builds as many TYPE_CODE_INT types as needed, with each one associated with some particular objfile.

6.4 Object File Formats

6.4.1 a.out

The a.out format is the original file format for Unix. It consists of three sections: text, data, and bss, which are for program code, initialized data, and uninitialized data, respectively.

The a.out format is so simple that it doesn't have any reserved place for debugging information. (Hey, the original Unix hackers used `adb', which is a machine-language debugger!) The only debugging format for a.out is stabs, which is encoded as a set of normal symbols with distinctive attributes.

The basic a.out reader is in dbxread.c.

6.4.2 COFF

The COFF format was introduced with System V Release 3 (SVR3) Unix. COFF files may have multiple sections, each prefixed by a header. The number of sections is limited.

The COFF specification includes support for debugging. Although this was a step forward, the debugging information was woefully limited. For instance, it was not possible to represent code that came from an included file.

The COFF reader is in coffread.c.

6.4.3 ECOFF

ECOFF is an extended COFF originally introduced for Mips and Alpha workstations.

The basic ECOFF reader is in mipsread.c.

6.4.4 XCOFF

The IBM RS/6000 running AIX uses an object file format called XCOFF. The COFF sections, symbols, and line numbers are used, but debugging symbols are dbx-style stabs whose strings are located in the .debug section (rather than the string table). For more information, see the stabs documentation (The Stabs Debugging Format).

The shared library scheme has a clean interface for figuring out what shared libraries are in use, but the catch is that everything which refers to addresses (symbol tables and breakpoints at least) needs to be relocated for both shared libraries and the main executable. At least using the standard mechanism this can only be done once the program has been run (or the core file has been read).

6.4.5 PE

Windows 95 and NT use the PE (Portable Executable) format for their executables. PE is basically COFF with additional headers.

While BFD includes special PE support, gdb needs only the basic COFF reader.

6.4.6 ELF

The ELF format came with System V Release 4 (SVR4) Unix. ELF is similar to COFF in being organized into a number of sections, but it removes many of COFF's limitations.

The basic ELF reader is in elfread.c.

6.4.7 SOM

SOM is HP's object file and debug format (not to be confused with IBM's SOM, which is a cross-language ABI).

The SOM reader is in hpread.c.

6.4.8 Other File Formats

Other file formats that have been supported by gdb include Netware Loadable Modules (nlmread.c).

6.5 Debugging File Formats

This section describes characteristics of debugging information that are independent of the object file format.

6.5.1 stabs

stabs started out as special symbols within the a.out format. Since then, it has been encapsulated into other file formats, such as COFF and ELF.

While dbxread.c does some of the basic stab processing, including for encapsulated versions, stabsread.c does the real work.

6.5.2 COFF

The basic COFF definition includes debugging information. The level of support is minimal and non-extensible, and is not often used.

6.5.3 Mips debug (Third Eye)

ECOFF includes a definition of a special debug format.

The file mdebugread.c implements reading for this format.

6.5.4 DWARF 1

DWARF 1 is a debugging format that was originally designed to be used with ELF in SVR4 systems.

The DWARF 1 reader is in dwarfread.c.

6.5.5 DWARF 2

DWARF 2 is an improved but incompatible version of DWARF 1.

The DWARF 2 reader is in dwarf2read.c.

6.5.6 SOM

Like COFF, the SOM definition includes debugging information.

6.6 Adding a New Symbol Reader to gdb

If you are using an existing object file format (a.out, COFF, ELF, etc), there is probably little to be done.

If you need to add a new object file format, you must first add it to BFD. This is beyond the scope of this document.

You must then arrange for the BFD code to provide access to the debugging symbols. Generally gdb will have to call swapping routines from BFD and a few other BFD internal routines to locate the debugging information. As much as possible, gdb should not depend on the BFD internal data structures.

For some targets (e.g., COFF), there is a special transfer vector used to call swapping routines, since the external data structures on various platforms have different sizes and layouts. Specialized routines that will only ever be implemented by one object file format may be called directly. This interface should be described in a file bfd/libxyz.h, which is included by gdb.


7 Language Support

gdb's language support is mainly driven by the symbol reader, although it is possible for the user to set the source language manually.

gdb chooses the source language by looking at the extension of the file recorded in the debug info; .c means C, .f means Fortran, etc. It may also use a special-purpose language identifier if the debug format supports it, like with DWARF.

7.1 Adding a Source Language to gdb

To add other languages to gdb's expression parser, follow these steps:

Create the expression parser.
This should reside in a file lang-exp.y. Routines for building parsed expressions into a union exp_element list are in parse.c.

Since we can't depend upon everyone having Bison, and YACC produces parsers that define a bunch of global names, the following lines must be included at the top of the YACC parser, to prevent the various parsers from defining the same global names:

          #define yyparse         lang_parse
          #define yylex           lang_lex
          #define yyerror         lang_error
          #define yylval          lang_lval
          #define yychar          lang_char
          #define yydebug         lang_debug
          #define yypact          lang_pact
          #define yyr1            lang_r1
          #define yyr2            lang_r2
          #define yydef           lang_def
          #define yychk           lang_chk
          #define yypgo           lang_pgo
          #define yyact           lang_act
          #define yyexca          lang_exca
          #define yyerrflag       lang_errflag
          #define yynerrs         lang_nerrs

At the bottom of your parser, define a struct language_defn and initialize it with the right values for your language. Define an initialize_lang routine and have it call `add_language(lang_language_defn)' to tell the rest of gdb that your language exists. You'll need some other supporting variables and functions, which will be used via pointers from your lang_language_defn. See the declaration of struct language_defn in language.h, and the other *-exp.y files, for more information.

Add any evaluation routines, if necessary
If you need new opcodes (that represent the operations of the language), add them to the enumerated type in expression.h. Add support code for these operations in the evaluate_subexp function defined in the file eval.c. Add cases for new opcodes in two functions from parse.c: prefixify_subexp and length_of_subexp. These compute the number of exp_elements that a given operation takes up.
Update some existing code
Add an enumerated identifier for your language to the enumerated type enum language in defs.h.

Update the routines in language.c so your language is included. These routines include type predicates and such, which (in some cases) are language dependent. If your language does not appear in the switch statement, an error is reported.

Also included in language.c is the code that updates the variable current_language, and the routines that translate the language_lang enumerated identifier into a printable string.

Update the function _initialize_language to include your language. This function picks the default language upon startup, so it depends upon which languages gdb is built for.

Update allocate_symtab in symfile.c and/or symbol-reading code so that the language of each symtab (source file) is set properly. This is used to determine the language to use at each stack frame level. Currently, the language is set based upon the extension of the source file. If the language can be better inferred from the symbol information, please set the language of the symtab in the symbol-reading code.

Add helper code to print_subexp (in expprint.c) to handle any new expression opcodes you have added to expression.h. Also, add the printed representations of your operators to op_print_tab.

Add a place of call
Add a call to lang_parse() and lang_error in parse_exp_1 (defined in parse.c).
Use macros to trim code
The user has the option of building gdb for some or all of the languages. If the user decides to build gdb for the language lang, then every file dependent on language.h will have the macro _LANG_lang defined in it. Use #ifdefs to leave out large routines that the user won't need if he or she is not using your language.

Note that you do not need to do this in your YACC parser, since if gdb is not built for lang, then (the compiled form of your parser) is not linked into gdb at all.

See the file for how gdb is configured for different languages.

Add dependencies in Makefile.in
Make sure you update the macro variables such as HFILES and OBJS; otherwise your code may not get linked in, or, worse yet, it may not get tarred into the distribution!


8 Host Definition

With the advent of Autoconf, it's rarely necessary to have host definition machinery anymore. The following information is provided, mainly, as an historical reference.

8.1 Adding a New Host

gdb's host configuration support normally happens via Autoconf. New host-specific definitions should not be needed. Older gdb hosts still use the host-specific definitions and files listed below, but these mostly exist for historical reasons, and will eventually disappear.

gdb/config/arch/xyz.mh

This file once contained both host and native configuration information (see Native Debugging) for the machine xyz. The host configuration information is now handled by Autoconf.

Host configuration information included a definition of XM_FILE=xm-xyz.h and possibly definitions for CC, SYSV_DEFINE, XM_CFLAGS, XM_ADD_FILES, XM_CLIBS, XM_CDEPS, etc.; see Makefile.in.

New host only configurations do not need this file.

gdb/config/arch/xm-xyz.h

This file once contained definitions and includes required when hosting gdb on machine xyz. Those definitions and includes are now handled by Autoconf.

New host and native configurations do not need this file.

Maintainer's note: Some hosts continue to use the xm-xyz.h file to define the macros HOST_FLOAT_FORMAT, HOST_DOUBLE_FORMAT and HOST_LONG_DOUBLE_FORMAT. That code also needs to be replaced with either an Autoconf or run-time test.

Generic Host Support Files

There are some “generic” versions of routines that can be used by various systems. These can be customized in various ways by macros defined in your xm-xyz.h file. If these routines work for the xyz host, you can just include the generic file's name (with `.o', not `.c') in XDEPFILES.

Otherwise, if your machine needs custom support routines, you will need to write routines that perform the same functions as the generic file. Put them into xyz-xdep.c, and put xyz-xdep.o into XDEPFILES.

This contains serial line support for Unix systems. This is always included, via the makefile variable SER_HARDWIRE; override this variable in the .mh file to avoid it.
This contains serial line support for 32-bit programs running under DOS, using the DJGPP (a.k.a. GO32) execution environment.

This contains generic TCP support using sockets.

8.2 Host Conditionals

When gdb is configured and compiled, various macros are defined or left undefined, to control compilation based on the attributes of the host system. These macros and their meanings (or if the meaning is not documented here, then one of the source files where they are used is indicated) are:

The default name of gdb's initialization file (normally .gdbinit).
This macro is deprecated.
If your host defines SIGWINCH, you can define this to be the name of a function to be called if SIGWINCH is received.
Define this to expand into code that will define the function named by the expansion of SIGWINCH_HANDLER.
Define this if your system is of a sort that will crash in tgetent if the stack happens not to be longword-aligned when main is called. This is a rare situation, but is known to occur on several different types of systems.
Define this if host files use \r\n rather than \n as a line terminator. This will cause source file listings to omit \r characters when printing and it will allow \r\n line endings of files which are “sourced” by gdb. It must be possible to open files in binary mode using O_BINARY or, for fopen, "rb".
The default value of the prompt string (normally "(gdb) ").
The name of the generic TTY device, defaults to "/dev/tty".
Define this if binary files are opened the same way as text files.
In some cases, use the system call mmap for reading symbol tables. For some machines this allows for sharing and quick updates.
Define this if the host system has termio.h.
Values for host-side constants.
Substitute for isatty, if not available.
This is the longest integer type available on the host. If not defined, it will default to long long or long, depending on CC_HAS_LONG_LONG.
Define this if the host C compiler supports long long. This is set by the configure script.
Define this if the host can handle printing of long long integers via the printf format conversion specifier ll. This is set by the configure script.
Define this if the host C compiler supports long double. This is set by the configure script.
Define this if the host can handle printing of long double float-point numbers via the printf format conversion specifier Lg. This is set by the configure script.
Define this if the host can handle the parsing of long double float-point numbers via the scanf format conversion specifier Lg. This is set by the configure script.
Define this if lseek (n) does not necessarily move to byte number n in the file. This is only used when reading source files. It is normally faster to define CRLF_SOURCE_FILES when possible.
This macro is used as the argument to lseek (or, most commonly, bfd_seek). FIXME: it should be replaced by SEEK_SET, the POSIX equivalent.
If defined, this should be one or more tokens, such as volatile, that can be used in both the declaration and definition of functions to indicate that they never return. The default is already set correctly if compiling with GCC. This will almost never need to be defined.
If defined, this should be one or more tokens, such as __attribute__ ((noreturn)), that can be used in the declarations of functions to indicate that they never return. The default is already set correctly if compiling with GCC. This will almost never need to be defined.
Define these to appropriate value for the system lseek, if not already defined.
This is the signal for stopping gdb. Defaults to SIGTSTP. (Only redefined for the Convex.)
Means that System V (prior to SVR4) include files are in use. (FIXME: This symbol is abused in infrun.c, regex.c, and utils.c for other things, at the moment.)
Define this to help placate lint in some situations.
Define this to override the defaults of __volatile__ or /**/.

Next: , Previous: Host Definition, Up: Top

9 Target Architecture Definition

gdb's target architecture defines what sort of machine-language programs gdb can work with, and how it works with them.

The target architecture object is implemented as the C structure struct gdbarch *. The structure, and its methods, are generated using the Bourne shell script

9.1 Operating System ABI Variant Handling

gdb provides a mechanism for handling variations in OS ABIs. An OS ABI variant may have influence over any number of variables in the target architecture definition. There are two major components in the OS ABI mechanism: sniffers and handlers.

A sniffer examines a file matching a BFD architecture/flavour pair (the architecture may be wildcarded) in an attempt to determine the OS ABI of that file. Sniffers with a wildcarded architecture are considered to be generic, while sniffers for a specific architecture are considered to be specific. A match from a specific sniffer overrides a match from a generic sniffer. Multiple sniffers for an architecture/flavour may exist, in order to differentiate between two different operating systems which use the same basic file format. The OS ABI framework provides a generic sniffer for ELF-format files which examines the EI_OSABI field of the ELF header, as well as note sections known to be used by several operating systems.

A handler is used to fine-tune the gdbarch structure for the selected OS ABI. There may be only one handler for a given OS ABI for each BFD architecture.

The following OS ABI variants are defined in osabi.h:

The ABI of the inferior is unknown. The default gdbarch settings for the architecture will be used.

UNIX System V Release 4

GNU using the Hurd kernel

Sun Solaris

OSF/1, including Digital UNIX and Compaq Tru64 UNIX

GNU using the Linux kernel

FreeBSD using the a.out executable format

FreeBSD using the ELF executable format

NetBSD using the a.out executable format

NetBSD using the ELF executable format

Windows CE


Novell NetWare

ARM Embedded ABI version 1

ARM Embedded ABI version 2

Generic ARM Procedure Call Standard

Here are the functions that make up the OS ABI framework:

— Function: const char *gdbarch_osabi_name (enum gdb_osabi osabi)

Return the name of the OS ABI corresponding to osabi.

— Function: void gdbarch_register_osabi (enum bfd_architecture arch, unsigned long machine, enum gdb_osabi osabi, void (*init_osabi)(struct gdbarch_info info, struct gdbarch *gdbarch))

Register the OS ABI handler specified by init_osabi for the architecture, machine type and OS ABI specified by arch, machine and osabi. In most cases, a value of zero for the machine type, which implies the architecture's default machine type, will suffice.

— Function: void gdbarch_register_osabi_sniffer (enum bfd_architecture arch, enum bfd_flavour flavour, enum gdb_osabi (*sniffer)(bfd *abfd))

Register the OS ABI file sniffer specified by sniffer for the BFD architecture/flavour pair specified by arch and flavour. If arch is bfd_arch_unknown, the sniffer is considered to be generic, and is allowed to examine flavour-flavoured files for any architecture.

— Function: enum gdb_osabi gdbarch_lookup_osabi (bfd *abfd)

Examine the file described by abfd to determine its OS ABI. The value GDB_OSABI_UNKNOWN is returned if the OS ABI cannot be determined.

— Function: void gdbarch_init_osabi (struct gdbarch_info info, struct gdbarch *gdbarch, enum gdb_osabi osabi)

Invoke the OS ABI handler corresponding to osabi to fine-tune the gdbarch structure specified by gdbarch. If a handler corresponding to osabi has not been registered for gdbarch's architecture, a warning will be issued and the debugging session will continue with the defaults already established for gdbarch.

9.2 Registers and Memory

gdb's model of the target machine is rather simple. gdb assumes the machine includes a bank of registers and a block of memory. Each register may have a different size.

gdb does not have a magical way to match up with the compiler's idea of which registers are which; however, it is critical that they do match up accurately. The only way to make this work is to get accurate information about the order that the compiler uses, and to reflect that in the REGISTER_NAME and related macros.

gdb can handle big-endian, little-endian, and bi-endian architectures.

9.3 Pointers Are Not Always Addresses

On almost all 32-bit architectures, the representation of a pointer is indistinguishable from the representation of some fixed-length number whose value is the byte address of the object pointed to. On such machines, the words “pointer” and “address” can be used interchangeably. However, architectures with smaller word sizes are often cramped for address space, so they may choose a pointer representation that breaks this identity, and allows a larger code address space.

For example, the Renesas D10V is a 16-bit VLIW processor whose instructions are 32 bits long. If the D10V used ordinary byte addresses to refer to code locations, then the processor would only be able to address 64kb of instructions. However, since instructions must be aligned on four-byte boundaries, the low two bits of any valid instruction's byte address are always zero—byte addresses waste two bits. So instead of byte addresses, the D10V uses word addresses—byte addresses shifted right two bits—to refer to code. Thus, the D10V can use 16-bit words to address 256kb of code space.

However, this means that code pointers and data pointers have different forms on the D10V. The 16-bit word 0xC020 refers to byte address 0xC020 when used as a data address, but refers to byte address 0x30080 when used as a code address.

(The D10V also uses separate code and data address spaces, which also affects the correspondence between pointers and addresses, but we're going to ignore that here; this example is already too long.)

To cope with architectures like this—the D10V is not the only one!—gdb tries to distinguish between addresses, which are byte numbers, and pointers, which are the target's representation of an address of a particular type of data. In the example above, 0xC020 is the pointer, which refers to one of the addresses 0xC020 or 0x30080, depending on the type imposed upon it. gdb provides functions for turning a pointer into an address and vice versa, in the appropriate way for the current architecture.

Unfortunately, since addresses and pointers are identical on almost all processors, this distinction tends to bit-rot pretty quickly. Thus, each time you port gdb to an architecture which does distinguish between pointers and addresses, you'll probably need to clean up some architecture-independent code.

Here are functions which convert between pointers and addresses:

— Function: CORE_ADDR extract_typed_address (void *buf, struct type *type)

Treat the bytes at buf as a pointer or reference of type type, and return the address it represents, in a manner appropriate for the current architecture. This yields an address gdb can use to read target memory, disassemble, etc. Note that buf refers to a buffer in gdb's memory, not the inferior's.

For example, if the current architecture is the Intel x86, this function extracts a little-endian integer of the appropriate length from buf and returns it. However, if the current architecture is the D10V, this function will return a 16-bit integer extracted from buf, multiplied by four if type is a pointer to a function.

If type is not a pointer or reference type, then this function will signal an internal error.

— Function: CORE_ADDR store_typed_address (void *buf, struct type *type, CORE_ADDR addr)

Store the address addr in buf, in the proper format for a pointer of type type in the current architecture. Note that buf refers to a buffer in gdb's memory, not the inferior's.

For example, if the current architecture is the Intel x86, this function stores addr unmodified as a little-endian integer of the appropriate length in buf. However, if the current architecture is the D10V, this function divides addr by four if type is a pointer to a function, and then stores it in buf.

If type is not a pointer or reference type, then this function will signal an internal error.

— Function: CORE_ADDR value_as_address (struct value *val)

Assuming that val is a pointer, return the address it represents, as appropriate for the current architecture.

This function actually works on integral values, as well as pointers. For pointers, it performs architecture-specific conversions as described above for extract_typed_address.

— Function: CORE_ADDR value_from_pointer (struct type *type, CORE_ADDR addr)

Create and return a value representing a pointer of type type to the address addr, as appropriate for the current architecture. This function performs architecture-specific conversions as described above for store_typed_address.

Here are some macros which architectures can define to indicate the relationship between pointers and addresses. These have default definitions, appropriate for architectures on which all pointers are simple unsigned byte addresses.

— Target Macro: CORE_ADDR POINTER_TO_ADDRESS (struct type *type, char *buf)

Assume that buf holds a pointer of type type, in the appropriate format for the current architecture. Return the byte address the pointer refers to.

This function may safely assume that type is either a pointer or a C++ reference type.

— Target Macro: void ADDRESS_TO_POINTER (struct type *type, char *buf, CORE_ADDR addr)

Store in buf a pointer of type type representing the address addr, in the appropriate format for the current architecture.

This function may safely assume that type is either a pointer or a C++ reference type.

9.4 Address Classes

Sometimes information about different kinds of addresses is available via the debug information. For example, some programming environments define addresses of several different sizes. If the debug information distinguishes these kinds of address classes through either the size info (e.g., DW_AT_byte_size in DWARF 2) or through an explicit address class attribute (e.g., DW_AT_address_class in DWARF 2), the following macros should be defined in order to disambiguate these types within gdb as well as provide the added information to a gdb user when printing type expressions.

— Target Macro: int ADDRESS_CLASS_TYPE_FLAGS (int byte_size, int dwarf2_addr_class)

Returns the type flags needed to construct a pointer type whose size is byte_size and whose address class is dwarf2_addr_class. This function is normally called from within a symbol reader. See dwarf2read.c.

— Target Macro: char *ADDRESS_CLASS_TYPE_FLAGS_TO_NAME (int type_flags)

Given the type flags representing an address class qualifier, return its name.

— Target Macro: int ADDRESS_CLASS_NAME_TO_TYPE_FLAGS (char *name, int *type_flags_ptr)

Given an address qualifier name, set the int referenced by type_flags_ptr to the type flags for that address class qualifier.

Since the need for address classes is rather rare, none of the address class macros are defined by default. Predicate macros are provided to detect when they are defined.

Consider a hypothetical architecture in which addresses are normally 32-bits wide, but 16-bit addresses are also supported. Furthermore, suppose that the DWARF 2 information for this architecture simply uses a DW_AT_byte_size value of 2 to indicate the use of one of these "short" pointers. The following functions could be defined to implement the address class macros:

     int
     somearch_address_class_type_flags (int byte_size,
                                        int dwarf2_addr_class)
     {
       if (byte_size == 2)
         return TYPE_FLAG_ADDRESS_CLASS_1;
       else
         return 0;
     }

     static char *
     somearch_address_class_type_flags_to_name (int type_flags)
     {
       if (type_flags & TYPE_FLAG_ADDRESS_CLASS_1)
         return "short";
       else
         return NULL;
     }

     int
     somearch_address_class_name_to_type_flags (char *name,
                                                int *type_flags_ptr)
     {
       if (strcmp (name, "short") == 0)
         {
           *type_flags_ptr = TYPE_FLAG_ADDRESS_CLASS_1;
           return 1;
         }
       else
         return 0;
     }

The qualifier @short is used in gdb's type expressions to indicate the presence of one of these "short" pointers. E.g., if the debug information indicates that short_ptr_var is one of these short pointers, gdb might show the following behavior:

     (gdb) ptype short_ptr_var
     type = int * @short

9.5 Raw and Virtual Register Representations

Maintainer note: This section is pretty much obsolete. The functionality described here has largely been replaced by pseudo-registers and the mechanisms described in Using Different Register and Memory Data Representations. See also Bug Tracking Database and ARI Index for more up-to-date information.

Some architectures use one representation for a value when it lives in a register, but use a different representation when it lives in memory. In gdb's terminology, the raw representation is the one used in the target registers, and the virtual representation is the one used in memory, and within gdb struct value objects.

Maintainer note: Notice that the same mechanism is used both to convert a register to a struct value and to convert between alternative register forms.

For almost all data types on almost all architectures, the virtual and raw representations are identical, and no special handling is needed. However, they do occasionally differ. For example:

In general, the raw representation is determined by the architecture, or gdb's interface to the architecture, while the virtual representation can be chosen for gdb's convenience. gdb's register file, registers, holds the register contents in raw format, and the gdb remote protocol transmits register values in raw format.

Your architecture may define the following macros to request conversions between the raw and virtual format:

— Target Macro: int REGISTER_CONVERTIBLE (int reg)

Return non-zero if register number reg's value needs different raw and virtual formats.

You should not use REGISTER_CONVERT_TO_VIRTUAL for a register unless this macro returns a non-zero value for that register.

— Target Macro: int DEPRECATED_REGISTER_RAW_SIZE (int reg)

The size of register number reg's raw value. This is the number of bytes the register will occupy in registers, or in a gdb remote protocol packet.

— Target Macro: int DEPRECATED_REGISTER_VIRTUAL_SIZE (int reg)

The size of register number reg's value, in its virtual format. This is the size a struct value's buffer will have, holding that register's value.

— Target Macro: struct type *DEPRECATED_REGISTER_VIRTUAL_TYPE (int reg)

This is the type of the virtual representation of register number reg. Note that there is no need for a macro giving a type for the register's raw form; once the register's value has been obtained, gdb always uses the virtual form.

— Target Macro: void REGISTER_CONVERT_TO_VIRTUAL (int reg, struct type *type, char *from, char *to)

Convert the value of register number reg to type, which should always be DEPRECATED_REGISTER_VIRTUAL_TYPE (reg). The buffer at from holds the register's value in raw format; the macro should convert the value to virtual format, and place it at to.

Note that REGISTER_CONVERT_TO_VIRTUAL and REGISTER_CONVERT_TO_RAW take their reg and type arguments in different orders.

You should only use REGISTER_CONVERT_TO_VIRTUAL with registers for which the REGISTER_CONVERTIBLE macro returns a non-zero value.

— Target Macro: void REGISTER_CONVERT_TO_RAW (struct type *type, int reg, char *from, char *to)

Convert a value of type type, which should always be DEPRECATED_REGISTER_VIRTUAL_TYPE (reg), to the raw format of register number reg. The buffer at from holds the value in virtual format; the macro should convert it to raw format, and place it at to.

Note that REGISTER_CONVERT_TO_VIRTUAL and REGISTER_CONVERT_TO_RAW take their reg and type arguments in different orders.

9.6 Using Different Register and Memory Data Representations

Maintainer's note: The way GDB manipulates registers is undergoing significant change. Many of the macros and functions referred to in this section are likely to be subject to further revision. See ARI Index and Bug Tracking Database for further information. cagney/2002-05-06.

Some architectures can represent a data object in a register using a form that is different from the object's more normal memory representation. For example:

In general, the register representation of a data type is determined by the architecture, or gdb's interface to the architecture, while the memory representation is determined by the Application Binary Interface.

For almost all data types on almost all architectures, the two representations are identical, and no special handling is needed. However, they do occasionally differ. Your architecture may define the following macros to request conversions between the register and memory representations of a data type:

— Target Macro: int CONVERT_REGISTER_P (int reg)

Return non-zero if the representation of a data value stored in this register may be different from the representation of that same data value when stored in memory.

When non-zero, the macros REGISTER_TO_VALUE and VALUE_TO_REGISTER are used to perform any necessary conversion.

— Target Macro: void REGISTER_TO_VALUE (int reg, struct type *type, char *from, char *to)

Convert the value of register number reg to a data object of type type. The buffer at from holds the register's value in raw format; the converted value should be placed in the buffer at to.

Note that REGISTER_TO_VALUE and VALUE_TO_REGISTER take their reg and type arguments in different orders.

You should only use REGISTER_TO_VALUE with registers for which the CONVERT_REGISTER_P macro returns a non-zero value.

— Target Macro: void VALUE_TO_REGISTER (struct type *type, int reg, char *from, char *to)

Convert a data value of type type to register number reg's raw format.

Note that REGISTER_TO_VALUE and VALUE_TO_REGISTER take their reg and type arguments in different orders.

You should only use VALUE_TO_REGISTER with registers for which the CONVERT_REGISTER_P macro returns a non-zero value.

— Target Macro: void REGISTER_CONVERT_TO_TYPE (int regnum, struct type *type, char *buf)

See mips-tdep.c. It does not do what you want.

9.7 Frame Interpretation

9.8 Inferior Call Setup

9.9 Compiler Characteristics

9.10 Target Conditionals

This section describes the macros that you can use to define the target machine.

If a raw machine instruction address includes any bits that are not really part of the address, then define this macro to expand into an expression that zeroes those bits in addr. This is only used for addresses of instructions, and even then not in all contexts.

For example, the two low-order bits of the PC on the Hewlett-Packard PA 2.0 architecture contain the privilege level of the corresponding instruction. Since instructions must always be aligned on four-byte boundaries, the processor masks out these bits to generate the actual address of the instruction. ADDR_BITS_REMOVE should filter out these bits with an expression such as ((addr) & ~3).

ADDRESS_CLASS_NAME_TO_TYPE_FLAGS (name, type_flags_ptr)
If name is a valid address class qualifier name, set the int referenced by type_flags_ptr to the mask representing the qualifier and return 1. If name is not a valid address class qualifier name, return 0.

The value for type_flags_ptr should be one of TYPE_FLAG_ADDRESS_CLASS_1, TYPE_FLAG_ADDRESS_CLASS_2, or possibly some combination of these values or'd together. See Address Classes.

Predicate which indicates whether ADDRESS_CLASS_NAME_TO_TYPE_FLAGS has been defined.
ADDRESS_CLASS_TYPE_FLAGS (byte_size, dwarf2_addr_class)
Given a pointer's byte size (as described by the debug information) and the possible DW_AT_address_class value, return the type flags used by gdb to represent this address class. The value returned should be one of TYPE_FLAG_ADDRESS_CLASS_1, TYPE_FLAG_ADDRESS_CLASS_2, or possibly some combination of these values or'd together. See Address Classes.
Predicate which indicates whether ADDRESS_CLASS_TYPE_FLAGS has been defined.
Return the name of the address class qualifier associated with the type flags given by type_flags.
Predicate which indicates whether ADDRESS_CLASS_TYPE_FLAGS_TO_NAME has been defined. See Address Classes.
ADDRESS_TO_POINTER (type, buf, addr)
Store in buf a pointer of type type representing the address addr, in the appropriate format for the current architecture. This macro may safely assume that type is either a pointer or a C++ reference type. See Pointers Are Not Always Addresses.
Define if the compiler promotes a short or char parameter to an int, but still reports the parameter as its original type, rather than the promoted type.
Define this if the numbering of bits on the target does not match the endianness of the target byte order. A value of 1 means that the bits are numbered in a big-endian bit order, 0 means little-endian.
This is the character array initializer for the bit pattern to put into memory where a breakpoint is set. Although it's common to use a trap instruction for a breakpoint, it's not required; for instance, the bit pattern could be an invalid instruction. The breakpoint must be no longer than the shortest instruction of the architecture.

BREAKPOINT has been deprecated in favor of BREAKPOINT_FROM_PC.

Similar to BREAKPOINT, but used for bi-endian targets.


Specify the breakpoint instruction sequence for a remote target. DEPRECATED_REMOTE_BREAKPOINT, DEPRECATED_BIG_REMOTE_BREAKPOINT and DEPRECATED_LITTLE_REMOTE_BREAKPOINT have been deprecated in favor of BREAKPOINT_FROM_PC (see BREAKPOINT_FROM_PC).
BREAKPOINT_FROM_PC (pcptr, lenptr)
Use the program counter to determine the contents and size of a breakpoint instruction. It returns a pointer to a string of bytes that encode a breakpoint instruction, stores the length of the string to *lenptr, and adjusts the program counter (if necessary) to point to the actual memory location where the breakpoint should be inserted.

Although it is common to use a trap instruction for a breakpoint, it's not required; for instance, the bit pattern could be an invalid instruction. The breakpoint must be no longer than the shortest instruction of the architecture.

Replaces all the other BREAKPOINT macros.

MEMORY_INSERT_BREAKPOINT (addr, contents_cache)
MEMORY_REMOVE_BREAKPOINT (addr, contents_cache)
Insert or remove memory based breakpoints. Reasonable defaults (default_memory_insert_breakpoint and default_memory_remove_breakpoint respectively) have been provided so that it is not necessary to define these for most architectures. Architectures which may want to define MEMORY_INSERT_BREAKPOINT and MEMORY_REMOVE_BREAKPOINT will likely have instructions that are oddly sized or are not stored in a conventional manner.

It may also be desirable (from an efficiency standpoint) to define custom breakpoint insertion and removal routines if BREAKPOINT_FROM_PC needs to read the target's memory for some reason.

Given an address at which a breakpoint is desired, return a breakpoint address adjusted to account for architectural constraints on breakpoint placement. This method is not needed by most targets.

The FR-V target (see frv-tdep.c) requires this method. The FR-V is a VLIW architecture in which a number of RISC-like instructions are grouped (packed) together into an aggregate instruction or instruction bundle. When the processor executes one of these bundles, the component instructions are executed in parallel.

In the course of optimization, the compiler may group instructions from distinct source statements into the same bundle. The line number information associated with one of the latter statements will likely refer to some instruction other than the first one in the bundle. So, if the user attempts to place a breakpoint on one of these latter statements, gdb must be careful to not place the break instruction on any instruction other than the first one in the bundle. (Remember though that the instructions within a bundle execute in parallel, so the first instruction is the instruction at the lowest address and has nothing to do with execution order.)

The FR-V's ADJUST_BREAKPOINT_ADDRESS method will adjust a breakpoint's address by scanning backwards for the beginning of the bundle, returning the address of the bundle.

Since the adjustment of a breakpoint may significantly alter a user's expectation, gdb prints a warning when an adjusted breakpoint is initially set and each time that that breakpoint is hit.

See the file inferior.h.

This method has been replaced by push_dummy_code (see push_dummy_code).

A C expression that should be nonzero if regno cannot be fetched from an inferior process. This is only relevant if FETCH_INFERIOR_REGISTERS is not defined.
A C expression that should be nonzero if regno should not be written to the target. This is often the case for program counters, status words, and other special registers. If this is not defined, gdb will assume that all registers may be written.
Return non-zero if register regnum can represent data values in a non-standard form. See Using Different Register and Memory Data Representations.
Define this to be the amount by which to decrement the PC after the program encounters a breakpoint. This is often the number of bytes in BREAKPOINT, though not always. For most targets this value will be 0.
If defined, this should evaluate to 1 if addr is in a shared library in which breakpoints cannot be set and so should be disabled.
If defined, then the `info float' command will print information about the processor's floating point unit.
print_registers_info (gdbarch, frame, regnum, all)
If defined, pretty print the value of the register regnum for the specified frame. If the value of regnum is -1, pretty print either all registers (all is non zero) or a select subset of registers (all is zero).

The default method prints one register per line, and if all is zero omits floating-point registers.

If defined, then the `info vector' command will call this function to print information about the processor's vector unit.

By default, the `info vector' command will print all vector registers (the register's type having the vector attribute).

Convert DWARF register number into gdb regnum. If not defined, no conversion will be performed.
Convert DWARF2 register number into gdb regnum. If not defined, no conversion will be performed.
Convert ECOFF register number into gdb regnum. If not defined, no conversion will be performed.
This is an expression that should designate the end of the text section.
EXTRACT_RETURN_VALUE(type, regbuf, valbuf)
Define this to extract a function's return value of type type from the raw register state regbuf and copy that, in virtual format, into valbuf.

This method has been deprecated in favour of gdbarch_return_value (see gdbarch_return_value).

When defined, extract from the array regbuf (containing the raw register state) the CORE_ADDR at which a function should return its structure value.

See gdbarch_return_value.

If the virtual frame pointer is kept in a register, then define this macro to be the number (greater than or equal to zero) of that register.

This should only need to be defined if DEPRECATED_TARGET_READ_FP is not defined.

Define this to an expression that returns 1 if the function invocation represented by fi does not have a stack frame associated with it. Otherwise return 0.
frame_align (address)
Define this to adjust address so that it meets the alignment requirements for the start of a new stack frame. A stack frame's alignment requirements are typically stronger than a target processor's stack alignment requirements (see DEPRECATED_STACK_ALIGN).

This function is used to ensure that, when creating a dummy frame, both the initial stack pointer and (if needed) the address of the return value are correctly aligned.

Unlike DEPRECATED_STACK_ALIGN, this function always adjusts the address in the direction of stack growth.

By default, no frame based stack alignment is performed.

int frame_red_zone_size
The number of bytes, beyond the innermost-stack-address, reserved by the ABI. A function is permitted to use this scratch area (instead of allocating extra stack space).

When performing an inferior function call, to ensure that it does not modify this area, gdb adjusts the innermost-stack-address by frame_red_zone_size bytes before pushing parameters onto the stack.

By default, zero bytes are allocated. The value must be aligned (see frame_align).

The amd64 (née x86-64) ABI documentation refers to this scratch area as the red zone.

Given frame, return a pointer to the calling frame.
Define this to be an expression that returns zero if the given frame is an outermost frame, with no caller, and nonzero otherwise. Most normal situations can be handled without defining this macro, including NULL chain pointers, dummy frames, and frames whose PC values are inside the startup file (e.g. crt0.o), inside main, or inside _start.
See frame.h. Determines the address of all registers in the current stack frame storing each in frame->saved_regs. Space for frame->saved_regs shall be allocated by DEPRECATED_FRAME_INIT_SAVED_REGS using frame_saved_regs_zalloc.

FRAME_FIND_SAVED_REGS is deprecated.

For the frame described by fi return the number of arguments that are being passed. If the number of arguments is not known, return -1.
Given frame, return the pc saved there. This is the return address.

This method is deprecated. See unwind_pc.

CORE_ADDR unwind_pc (struct frame_info *this_frame)
Return the instruction address, in this_frame's caller, at which execution will resume after this_frame returns. This is commonly referred to as the return address.

The implementation, which must be frame agnostic (work with any frame), is typically no more than:

          ULONGEST pc;
          frame_unwind_unsigned_register (this_frame, D10V_PC_REGNUM, &pc);
          return d10v_make_iaddr (pc);

See DEPRECATED_FRAME_SAVED_PC, which this method replaces.

CORE_ADDR unwind_sp (struct frame_info *this_frame)
Return the frame's innermost stack address. This is commonly referred to as the frame's stack pointer.

The implementation, which must be frame agnostic (work with any frame), is typically no more than:

          ULONGEST sp;
          frame_unwind_unsigned_register (this_frame, D10V_SP_REGNUM, &sp);
          return d10v_make_daddr (sp);

See TARGET_READ_SP, which this method replaces.

For some COFF targets, the x_sym.x_misc.x_fsize field of the function end symbol is 0. For such targets, you must define FUNCTION_EPILOGUE_SIZE to expand into the standard size of a function's epilogue.
An integer giving the offset, in bytes, between a function's address (as used in the values of symbols, function pointers, etc.) and the function's first genuine instruction.

This is zero on almost all machines: the function's address is usually the address of its first instruction. However, on the VAX, for example, each function starts with two bytes containing a bitmask indicating which registers to save upon entry to the function. The VAX call instructions check this value, and save the appropriate registers automatically. Thus, since the offset from the function's address to its first instruction is two bytes, DEPRECATED_FUNCTION_START_OFFSET would be 2 on the VAX.

If defined, these are the names of the symbols that gdb will look for to detect that GCC compiled the file. The default symbols are gcc_compiled. and gcc2_compiled., respectively. (Currently only defined for the Delta 68.)
If defined and non-zero, enables support for multiple architectures within gdb.

This support can be enabled at two levels. At level one, only definitions for previously undefined macros are provided; at level two, a multi-arch definition of all architecture dependent macros will be defined.

This determines whether horrible kludge code in dbxread.c and partial-stab.h is used to mangle multiple-symbol-table files from HPPA machines. This should all be ripped out, and a scheme like elfread.c used instead.
For most machines, this is a target-dependent parameter. On the DECstation and the Iris, this is a native-dependent parameter, since the header file setjmp.h is needed to define it.

This macro determines the target PC address that longjmp will jump to, assuming that we have just stopped at a longjmp breakpoint. It takes a CORE_ADDR * as argument, and stores the target PC value through this pointer. It examines the current state of the machine as needed.

Define this if you need to supply your own definition for the function DEPRECATED_GET_SAVED_REGISTER.
Shows that we are configured for an IBM RS/6000 system. This conditional should be eliminated (FIXME) and replaced by feature-specific macros. It was introduced in haste and we are repenting at leisure.
An x86-based target can define this to use the generic x86 watchpoint support; see I386_USE_GENERIC_WATCHPOINTS.
Some systems have routines whose names start with `$'. Giving this macro a non-zero value tells gdb's expression parser to check for such routines when parsing tokens that begin with `$'.

On HP-UX, certain system routines (millicode) have names beginning with `$' or `$$'. For example, $$dyncall is a millicode routine that handles inter-space procedure calls on PA-RISC.

If additional information about the frame is required this should be stored in frame->extra_info. Space for frame->extra_info is allocated using frame_extra_info_zalloc.
This is a C statement that sets the pc of the frame pointed to by prev. [By default...]
INNER_THAN (lhs, rhs)
Returns non-zero if stack address lhs is inner than (nearer to the stack top) stack address rhs. Define this as lhs < rhs if the target's stack grows downward in memory, or lhs > rhs if the stack grows upward.
gdbarch_in_function_epilogue_p (gdbarch, pc)
Returns non-zero if the given pc is in the epilogue of a function. The epilogue of a function is defined as the part of a function where the function's stack frame has already been destroyed, up to the final `return from function call' instruction.
Define these to be the start and end address of the sigtramp for the given pc. On machines where the address is just a compile time constant, the macro expansion will typically just ignore the supplied pc.
Define this to evaluate to nonzero if the program is stopped in the trampoline that connects to a shared library.
Define this to evaluate to nonzero if the program is stopped in the trampoline that returns from a shared library.
Define this to evaluate to nonzero if the program is stopped in the dynamic linker.
Define this to evaluate to the (nonzero) address at which execution should continue to get past the dynamic linker's symbol resolution function. A zero value indicates that it is not important or necessary to set a breakpoint to get through the dynamic linker and that single stepping will suffice.
Define this when the architecture needs to handle non-pointer-to-address conversions specially. Converts that value to an address according to the current architecture's conventions.

Pragmatics: When the user copies a well defined expression from their source code and passes it, as a parameter, to gdb's print command, they should get the same value as would have been computed by the target program. Any deviation from this rule can cause major confusion and annoyance, and needs to be justified carefully. In other words, gdb doesn't really have the freedom to do these conversions in clever and useful ways. It has, however, been pointed out that users aren't complaining about how gdb casts integers to pointers; they are complaining that they can't take an address from a disassembly listing and give it to x/i. Adding an architecture method like INTEGER_TO_ADDRESS certainly makes it possible for gdb to “get it right” in all circumstances.

See Pointers Are Not Always Addresses.

(Specific to the a29k.)
Assume that buf holds a pointer of type type, in the appropriate format for the current architecture. Return the byte address the pointer refers to. See Pointers Are Not Always Addresses.
Return non-zero if reg uses different raw and virtual formats. See Raw and Virtual Register Representations.
REGISTER_TO_VALUE(regnum, type, from, to)
Convert the raw contents of register regnum into a value of type type. See Using Different Register and Memory Data Representations.
Return the raw size of reg; defaults to the size of the register's virtual type. See Raw and Virtual Register Representations.
register_reggroup_p (gdbarch, regnum, reggroup)
Return non-zero if register regnum is a member of the register group reggroup.

By default, registers are grouped as follows:

`float_reggroup'
Any register with a valid name and a floating-point type.
`vector_reggroup'
Any register with a valid name and a vector type.
`general_reggroup'
Any register with a valid name and a type other than vector or floating-point.
`all_reggroup'
Any register with a valid name.

Return the virtual size of reg; defaults to the size of the register's virtual type. See Raw and Virtual Register Representations.
Return the virtual type of reg. See Raw and Virtual Register Representations.
struct type *register_type (gdbarch, reg)
If defined, return the type of register reg. This function supersedes DEPRECATED_REGISTER_VIRTUAL_TYPE. See Raw and Virtual Register Representations.
REGISTER_CONVERT_TO_VIRTUAL(reg, type, from, to)
Convert the value of register reg from its raw form to its virtual form. See Raw and Virtual Register Representations.
REGISTER_CONVERT_TO_RAW(type, reg, from, to)
Convert the value of register reg from its virtual form to its raw form. See Raw and Virtual Register Representations.
const struct regset *regset_from_core_section (struct gdbarch * gdbarch, const char * sect_name, size_t sect_size)
Return the appropriate register set for a core file section with name sect_name and size sect_size.
Define this as 1 if the target does not have a hardware single-step mechanism. The macro SOFTWARE_SINGLE_STEP must also be defined.
SOFTWARE_SINGLE_STEP(signal, insert_breakpoints_p)
A function that inserts or removes (depending on insert_breakpoints_p) breakpoints at each possible destination of the next instruction. See sparc-tdep.c and rs6000-tdep.c for examples.
Somebody clever observed that the more actual addresses you have in the debug information, the more time the linker has to spend relocating them. So whenever there's some other way the debugger could find the address it needs, you should omit it from the debug info, to make linking faster.

SOFUN_ADDRESS_MAYBE_MISSING indicates that a particular set of hacks of this sort are in use, affecting N_SO and N_FUN entries in stabs-format debugging information. N_SO stabs mark the beginning and ending addresses of compilation units in the text segment. N_FUN stabs mark the starts and ends of functions.


If defined, print information about the load segment for the program counter. (Defined only for the RS/6000.)
If the program counter is kept in a register, then define this macro to be the number (greater than or equal to zero) of that register.

This should only need to be defined if TARGET_READ_PC and TARGET_WRITE_PC are not defined.

If non-zero, round arguments to a boundary of this many bits before pushing them on the stack.
stabs_argument_has_addr (gdbarch, type)
Define this to return nonzero if a function argument of type type is passed by reference instead of value.


A hook defined for XCOFF reading.
(Only used in unsupported Convex configuration.)
If defined, this is the number of the processor status register. (This definition is only used in generic code when parsing "$ps".)
If defined, used by frame_pop to remove a stack frame. This method has been superseded by generic code.
push_dummy_call (gdbarch, function, regcache, pc_addr, nargs, args, sp, struct_return, struct_addr)
Define this to push the dummy frame's call to the inferior function onto the stack. In addition to pushing the nargs arguments, the code should push struct_addr (when struct_return is set), and the return address (bp_addr).

function is a pointer to a struct value; on architectures that use function descriptors, this contains the function descriptor value.

Returns the updated top-of-stack pointer.


CORE_ADDR push_dummy_code (gdbarch, sp, funaddr, using_gcc, args, nargs, value_type, real_pc, bp_addr)
Given a stack based call dummy, push the instruction sequence (including space for a breakpoint) to which the called function should return.

Set bp_addr to the address at which the breakpoint instruction should be inserted, real_pc to the resume address when starting the call sequence, and return the updated inner-most stack address.

By default, the stack is grown sufficiently to hold a frame-aligned (see frame_align) breakpoint, bp_addr is set to the address reserved for that breakpoint, and real_pc is set to funaddr.


Return the name of register i as a string. May return NULL or an empty string to indicate that register i is not valid.
Define this to return 1 if the given type will be passed by pointer rather than directly.

This method has been replaced by stabs_argument_has_addr (see stabs_argument_has_addr).

Used in `call_function_by_hand' to notify the target dependent code of the top-of-stack value that will be passed to the inferior code. This is the value of the SP after both the dummy frame and space for parameters/results have been allocated on the stack. See unwind_dummy_id.
Define this to convert sdb register numbers into gdb regnums. If not defined, no conversion will be done.
enum return_value_convention gdbarch_return_value (struct gdbarch *gdbarch, struct type *valtype, struct regcache *regcache, void *readbuf, const void *writebuf)
Given a function with a return value of type valtype, return which return-value convention that function would use.

gdb currently recognizes two function return-value conventions: RETURN_VALUE_REGISTER_CONVENTION where the return value is found in registers; and RETURN_VALUE_STRUCT_CONVENTION where the return value is found in memory and the address of that memory location is passed in as the function's first parameter.

If the register convention is being used, and writebuf is non-NULL, also copy the return-value in writebuf into regcache.

If the register convention is being used, and readbuf is non-NULL, also copy the return value from regcache into readbuf (regcache contains a copy of the registers from the just returned function).

See DEPRECATED_EXTRACT_STRUCT_VALUE_ADDRESS, for a description of how return-values that use the struct convention are handled.

Maintainer note: This method replaces separate predicate, extract, and store methods. By having only one method, the logic needed to determine the return-value convention need only be implemented in one place. If gdb were written in an object-oriented language, this method would instead return an object that knew how to perform the register return-value extract and store.

Maintainer note: This method does not take a gcc_p parameter, and such a parameter should not be added. If an architecture that requires per-compiler or per-function information is identified, then the replacement of rettype with a struct value function parameter should be pursued.

Maintainer note: The regcache parameter limits this method to the innermost frame. While replacing regcache with a struct frame_info frame parameter would remove that limitation, there has yet to be a demonstrated need for such a change.

Advance the inferior's PC past a permanent breakpoint. gdb normally steps over a breakpoint by removing it, stepping one instruction, and re-inserting the breakpoint. However, permanent breakpoints are hardwired into the inferior, and can't be removed, so this strategy doesn't work. Calling SKIP_PERMANENT_BREAKPOINT adjusts the processor's state so that execution will resume just after the breakpoint. This macro does the right thing even when the breakpoint is in the delay slot of a branch or jump.
A C expression that returns the address of the “real” code beyond the function entry prologue found at pc.
If the target machine has trampoline code that sits between callers and the functions being called, then define this macro to return a new PC that is at the start of the real function.
If the stack-pointer is kept in a register, then define this macro to be the number (greater than or equal to zero) of that register, or -1 if there is no such register.
Define this to convert stab register numbers (as gotten from `r' declarations) into gdb regnums. If not defined, no conversion will be done.
Define this to increase addr so that it meets the alignment requirements for the processor's stack.

Unlike frame_align, this function always adjusts addr upwards.

By default, no stack alignment is performed.

Define this to return true if the address is of an instruction with a delay slot. If a breakpoint has been placed in the instruction's delay slot, gdb will single-step over that instruction before resuming normally. Currently only defined for MIPS.
STORE_RETURN_VALUE (type, regcache, valbuf)
A C expression that writes the function return value, found in valbuf, into the regcache. type is the type of the value that is to be returned.

This method has been deprecated in favour of gdbarch_return_value (see gdbarch_return_value).

The default value of the “symbol-reloading” variable. (Never defined in current sources.)
Number of bits in a char; defaults to 8.
Non-zero if char is normally signed on this architecture; zero if it should be unsigned.

The ISO C standard requires the compiler to treat char as equivalent to either signed char or unsigned char; any character in the standard execution set is supposed to be positive. Most compilers treat char as signed, but char is unsigned on the IBM S/390, RS6000, and PowerPC targets.

Number of bits in a complex number; defaults to 2 * TARGET_FLOAT_BIT.

At present this macro is not used.

Number of bits in a double float; defaults to 8 * TARGET_CHAR_BIT.
Number of bits in a double complex; defaults to 2 * TARGET_DOUBLE_BIT.

At present this macro is not used.

Number of bits in a float; defaults to 4 * TARGET_CHAR_BIT.
Number of bits in an integer; defaults to 4 * TARGET_CHAR_BIT.
Number of bits in a long integer; defaults to 4 * TARGET_CHAR_BIT.
Number of bits in a long double float; defaults to 2 * TARGET_DOUBLE_BIT.
Number of bits in a long long integer; defaults to 2 * TARGET_LONG_BIT.
Number of bits in a pointer; defaults to TARGET_INT_BIT.
Number of bits in a short integer; defaults to 2 * TARGET_CHAR_BIT.
TARGET_WRITE_PC (val, pid)
These change the behavior of read_pc, write_pc, and read_sp. For most targets, these may be left undefined. gdb will call the read and write register functions with the relevant _REGNUM argument.

These macros are useful when a target keeps one of these registers in a hard to get at place; for example, part in a segment register and part in an ordinary register.

See unwind_sp, which replaces TARGET_READ_SP.

Returns a (register, offset) pair representing the virtual frame pointer in use at the code address pc. If virtual frame pointers are not used, a default definition simply returns DEPRECATED_FP_REGNUM, with an offset of zero.
If non-zero, the target has support for hardware-assisted watchpoints. See watchpoints, for more details and other related macros.
TARGET_PRINT_INSN (addr, info)
This is the function used by gdb to print an assembly instruction. It prints the instruction at address addr in debugged memory and returns the length of the instruction, in bytes. If a target doesn't define its own printing routine, it defaults to an accessor function for the global pointer deprecated_tm_print_insn. This usually points to a function in the opcodes library (see Opcodes). info is a structure (of type disassemble_info) defined in include/dis-asm.h used to pass information to the instruction decoding routine.
struct frame_id unwind_dummy_id (struct frame_info *frame)
Given frame return a struct frame_id that uniquely identifies an inferior function call's dummy frame. The value returned must match the dummy frame stack value previously saved using SAVE_DUMMY_FRAME_TOS. See SAVE_DUMMY_FRAME_TOS.
If defined, this must be an expression that is nonzero if a value of the given type being returned from a function must have space allocated for it on the stack. gcc_p is true if the function being considered is known to have been compiled by GCC; this is helpful for systems where GCC is known to use a different calling convention than other compilers.

This method has been deprecated in favour of gdbarch_return_value (see gdbarch_return_value).

VALUE_TO_REGISTER(type, regnum, from, to)
Convert a value of type type into the raw contents of register regnum. See Using Different Register and Memory Data Representations.
For dbx-style debugging information, if the compiler puts variable declarations inside LBRAC/RBRAC blocks, this should be defined to be nonzero. desc is the value of n_desc from the N_RBRAC symbol, and gcc_p is true if gdb has noticed the presence of either the GCC_COMPILED_SYMBOL or the GCC2_COMPILED_SYMBOL. By default, this is 0.
Similarly, for OS/9000. Defaults to 1.

Motorola M68K target conditionals.

Define this to be the 4-bit location of the breakpoint trap vector. If not defined, it will default to 0xf.
Defaults to 1.
A string containing the name of the function to call in order to allocate some memory in the inferior. The default value is "malloc".

9.11 Adding a New Target

The following files add a target to gdb:

Contains a Makefile fragment specific to this target. Specifies what object files are needed for target ttt, by defining `TDEPFILES=...' and `TDEPLIBS=...'. Also specifies the header file which describes ttt, by defining `TM_FILE= tm-ttt.h'.

You can also define `TM_CFLAGS', `TM_CLIBS', `TM_CDEPS', but these are now deprecated, replaced by autoconf, and may go away in future versions of gdb.
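A minimal fragment of this kind might look like the following sketch (the `ttt' names are the placeholder target name used throughout this section, not a real port):

```makefile
# Hypothetical config/ttt/ttt.mt fragment for a target `ttt'.
TDEPFILES= ttt-tdep.o
TM_FILE= tm-ttt.h
```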

Contains any miscellaneous code required for this target machine. On some machines it doesn't exist at all. Sometimes the macros in tm-ttt.h become very complicated, so they are implemented as functions here instead, and the macro is simply defined to call the function. This is vastly preferable, since it is easier to understand and debug.
This often exists to describe the basic layout of the target machine's processor chip (registers, stack, etc.). If used, it is included by ttt-tdep.h. It can be shared among many targets that use the same processor.
(tm.h is a link to this file, created by configure). Contains macro definitions about the target machine's registers, stack frame format and instructions.

New targets do not need this file and should not create it.

This often exists to describe the basic layout of the target machine's processor chip (registers, stack, etc.). If used, it is included by tm-ttt.h. It can be shared among many targets that use the same processor.

New targets do not need this file and should not create it.

If you are adding a new operating system for an existing CPU chip, add a config/tm-os.h file that describes the operating system facilities that are unusual (extra symbol table info; the breakpoint instruction needed; etc.). Then write an arch/tm-os.h that just #includes tm-arch.h and config/tm-os.h.

9.12 Converting an existing Target Architecture to Multi-arch

This section describes the current accepted best practice for converting an existing target architecture to the multi-arch framework.

The process consists of generating, testing, posting and committing a sequence of patches. Each patch must contain a single change, for instance:

There isn't a size limit on a patch, however, a developer is strongly encouraged to keep the patch size down.

Since each patch is well defined, and since each change has been tested and shows no regressions, the patches are considered fairly obvious. Such patches, when submitted by developers listed in the MAINTAINERS file, do not need approval. Occasional steps in the process may be more complicated and less clear. The developer is expected to use their judgment and is encouraged to seek advice as needed.

9.12.1 Preparation

The first step is to establish control. Build (with -Werror enabled) and test the target so that there is a baseline against which the debugger can be compared.

At no stage can the test results regress or gdb stop compiling with -Werror.

9.12.2 Add the multi-arch initialization code

The objective of this step is to establish the basic multi-arch framework. It involves

9.12.3 Update multi-arch incompatible mechanisms

Some mechanisms do not work with multi-arch. They include:


At this stage you could also consider converting the macros into functions.

9.12.4 Prepare for multi-arch level to one

Temporarily set GDB_MULTI_ARCH to GDB_MULTI_ARCH_PARTIAL and then build and start gdb (the change should not be committed). gdb may not build, and once built, it may die with an internal error listing the architecture methods that must be provided.

Fix any build problems (patch(es)).

Convert all the architecture methods listed, which are only macros, into functions (patch(es)).

Update arch_gdbarch_init to set all the missing architecture methods and wrap the corresponding macros in #if !GDB_MULTI_ARCH (patch(es)).

9.12.5 Set multi-arch level one

Change the value of GDB_MULTI_ARCH to GDB_MULTI_ARCH_PARTIAL (a single patch).

Any problems with throwing “the switch” should have been fixed already.

9.12.6 Convert remaining macros

Suggest converting macros into functions (and setting the corresponding architecture method) in small batches.

9.12.7 Set multi-arch level to two

This should go smoothly.

9.12.8 Delete the TM file

The tm-arch.h file can be deleted, and the target's .mt makefile fragment updated so that it no longer defines TM_FILE.

Next: , Previous: Target Architecture Definition, Up: Top

10 Target Vector Definition

The target vector defines the interface between gdb's abstract handling of target systems, and the nitty-gritty code that actually exercises control over a process or a serial port. gdb includes some 30-40 different target vectors; however, each configuration of gdb includes only a few of them.

10.1 File Targets

Both executables and core files have target vectors.

10.2 Standard Protocol and Remote Stubs

gdb's file remote.c talks a serial protocol to code that runs in the target system. gdb provides several sample stubs that can be integrated into target programs or operating systems for this purpose; they are named *-stub.c.

The gdb user's manual describes how to put such a stub into your target code. What follows is a discussion of integrating the SPARC stub into a complicated operating system (rather than a simple program), by Stu Grossman, the author of this stub.

The trap handling code in the stub assumes the following upon entry to trap_low:

  1. %l1 and %l2 contain pc and npc respectively at the time of the trap;
  2. traps are disabled;
  3. you are in the correct trap window.

As long as your trap handler can guarantee those conditions, then there is no reason why you shouldn't be able to “share” traps with the stub. The stub has no requirement that it be jumped to directly from the hardware trap vector. That is why it calls exceptionHandler(), which is provided by the external environment. For instance, this could set up the hardware traps to actually execute code which calls the stub first, and then transfers to its own trap handler.

For the most part, there probably won't be much of an issue with “sharing” traps, as the traps we use are usually not used by the kernel, and often indicate unrecoverable error conditions. Anyway, this is all controlled by a table, and is trivial to modify. The most important trap for us is for ta 1. Without that, we can't single step or do breakpoints. Everything else is unnecessary for the proper operation of the debugger/stub.

From reading the stub, it's probably not obvious how breakpoints work. They are simply done by deposit/examine operations from gdb.

10.3 ROM Monitor Interface

10.4 Custom Protocols

10.5 Transport Layer

10.6 Builtin Simulator

Next: , Previous: Target Vector Definition, Up: Top

11 Native Debugging

Several files control gdb's configuration for native support:

Specifies Makefile fragments needed by a native configuration on machine xyz. In particular, this lists the required native-dependent object files, by defining `NATDEPFILES=...'. Also specifies the header file which describes native support on xyz, by defining `NAT_FILE= nm-xyz.h'. You can also define `NAT_CFLAGS', `NAT_ADD_FILES', `NAT_CLIBS', `NAT_CDEPS', etc.; see

Maintainer's note: The .mh suffix is because this file originally contained Makefile fragments for hosting gdb on machine xyz. While the file is no longer used for this purpose, the .mh suffix remains. Perhaps someone will eventually rename these fragments so that they have a .mn suffix.

(nm.h is a link to this file, created by configure). Contains C macro definitions describing the native system environment, such as child process control and core file support.
Contains any miscellaneous C code required for native support of this machine. On some machines it doesn't exist at all.

There are some “generic” versions of routines that can be used by various systems. These can be customized in various ways by macros defined in your nm-xyz.h file. If these routines work for the xyz host, you can just include the generic file's name (with `.o', not `.c') in NATDEPFILES.

Otherwise, if your machine needs custom support routines, you will need to write routines that perform the same functions as the generic file. Put them into xyz-nat.c, and put xyz-nat.o into NATDEPFILES.

This contains the target_ops vector that supports Unix child processes on systems which use ptrace and wait to control the child.
This contains the target_ops vector that supports Unix child processes on systems which use /proc to control the child.
This does the low-level grunge that uses Unix system calls to do a “fork and exec” to start up a child process.
This is the low level interface to inferior processes for systems using the Unix ptrace call in a vanilla way.

11.1 Native core file Support

Support for reading registers out of a core file. This routine calls register_addr(), see below. Now that BFD is used to read core files, virtually all machines should use core-aout.c, and should just provide fetch_core_registers in xyz-nat.c (or REGISTER_U_ADDR in nm-xyz.h).
If your nm-xyz.h file defines the macro REGISTER_U_ADDR(addr, blockend, regno), it should be defined to set addr to the offset within the `user' struct of gdb register number regno. blockend is the offset within the “upage” of u.u_ar0. If REGISTER_U_ADDR is defined, core-aout.c will define the register_addr() function and use the macro in it. If you do not define REGISTER_U_ADDR, but you are using the standard fetch_core_registers(), you will need to define your own version of register_addr(), put it into your xyz-nat.c file, and be sure xyz-nat.o is in the NATDEPFILES list. If you have your own fetch_core_registers(), you may not need a separate register_addr(). Many custom fetch_core_registers() implementations simply locate the registers themselves.

When making gdb run native on a new operating system, to make it possible to debug core files, you will need to either write specific code for parsing your OS's core files, or customize bfd/trad-core.c. First, use whatever #include files your machine uses to define the struct of registers that is accessible (possibly in the u-area) in a core file (rather than machine/reg.h), and an include file that defines whatever header exists on a core file (e.g. the u-area or a struct core). Then modify trad_unix_core_file_p to use these values to set up the section information for the data segment, stack segment, any other segments in the core file (perhaps shared library contents or control information), “registers” segment, and if there are two discontiguous sets of registers (e.g. integer and float), the “reg2” segment. This section information basically delimits areas in the core file in a standard way, which the section-reading routines in BFD know how to seek around in.

Then back in gdb, you need a matching routine called fetch_core_registers. If you can use the generic one, it's in core-aout.c; if not, it's in your xyz-nat.c file. It will be passed a char pointer to the entire “registers” segment, its length, and a zero; or a char pointer to the entire “regs2” segment, its length, and a 2. The routine should suck out the supplied register values and install them into gdb's “registers” array.

If your system uses /proc to control processes, and uses ELF format core files, then you may be able to use the same routines for reading the registers out of processes and out of core files.

11.2 ptrace

11.3 /proc

11.4 win32

11.5 shared libraries

11.6 Native Conditionals

When gdb is configured and compiled, various macros are defined or left undefined, to control compilation when the host and target systems are the same. These macros should be defined (or left undefined) in nm-system.h.

If the machine stores all registers at once in the child process, then define this to ensure that all values are correct. This usually entails a read from the child.

[Note that this is incorrectly defined in xm-system.h files currently.]

Define this if the native-dependent code will provide its own routines fetch_inferior_registers and store_inferior_registers in host-nat.c. If this symbol is not defined, and infptrace.c is included in this configuration, the default routines in infptrace.c are used for these functions.
This macro is normally defined to be the number of the first floating point register, if the machine has such registers. As such, it would appear only in target-specific code. However, /proc support uses this to decide whether floats are in use on this target.
For most machines, this is a target-dependent parameter. On the DECstation and the Iris, this is a native-dependent parameter, since setjmp.h is needed to define it.

This macro determines the target PC address that longjmp will jump to, assuming that we have just stopped at a longjmp breakpoint. It takes a CORE_ADDR * as argument, and stores the target PC value through this pointer. It examines the current state of the machine as needed.

An x86-based machine can define this to use the generic x86 watchpoint support; see I386_USE_GENERIC_WATCHPOINTS.
Define this to the address of the u structure (the “user struct”, also known as the “u-page”) in kernel virtual memory. gdb needs to know this so that it can subtract this address from absolute addresses in the upage, that are obtained via ptrace or from core files. On systems that don't need this value, set it to zero.
Define this to cause gdb to determine the address of u at runtime, by using HP-style nlist on the kernel's image in the root directory.
Define this so that, when a breakpoint insertion fails, gdb can warn the user that another process may be running with the same executable.
Defines the format for the name of a /proc device. Should be defined in nm.h only in order to override the default definition in procfs.c.
The type of the third argument to the ptrace system call, if it exists and is different from int.
Defines the offset of the registers in the “u area”.
If defined, is a string to prefix on the shell command used to start the inferior.
If defined, this is the name of the shell to use to run the inferior. Defaults to "/bin/sh".
SOLIB_ADD (filename, from_tty, targ, readsyms)
Define this to expand into an expression that will cause the symbols in filename to be added to gdb's symbol table. If readsyms is zero symbols are not read but any necessary low level processing for filename is still done.
Define this to expand into any shared-library-relocation code that you want to be run just after the child process has been forked.
When starting an inferior, gdb normally expects to trap twice: once when the shell execs, and once when the program itself execs. If the actual number of traps is something other than 2, then define this macro to expand into the number expected.
This determines whether small routines in *-tdep.c, which translate register values between gdb's internal representation and the /proc representation, are compiled.
This is the offset of the registers in the upage. It need only be defined if the generic ptrace register access routines in infptrace.c are being used (that is, infptrace.c is configured in, and FETCH_INFERIOR_REGISTERS is not defined). If the default value from infptrace.c is good enough, leave it undefined.

The default value means that u.u_ar0 points to the location of the registers. I'm guessing that #define U_REGS_OFFSET 0 means that u.u_ar0 is the location of the registers.

See objfiles.c.
Define this to debug ptrace calls.

Next: , Previous: Native Debugging, Up: Top

12 Support Libraries

12.1 BFD

BFD provides support for gdb in several ways:

identifying executable and core files
BFD will identify a variety of file types, including a.out, coff, and several variants thereof, as well as several kinds of core files.
access to sections of files
BFD parses the file headers to determine the names, virtual addresses, sizes, and file locations of all the various named sections in files (such as the text section or the data section). gdb simply calls BFD to read or write section x at byte offset y for length z.
specialized core file support
BFD provides routines to determine the failing command name stored in a core file, the signal with which the program failed, and whether a core file matches (i.e. could be a core dump of) a particular executable file.
locating the symbol information
gdb uses an internal interface of BFD to determine where to find the symbol information in an executable file or symbol-file. gdb itself handles the reading of symbols, since BFD does not “understand” debug symbols, but gdb uses BFD's cached information to find the symbols, string table, etc.

12.2 opcodes

The opcodes library provides gdb's disassembler. (It's a separate library because it's also used in binutils, for objdump).

12.3 readline

12.4 mmalloc

12.5 libiberty

The libiberty library provides a set of functions and features that integrate and improve on functionality found in modern operating systems. Broadly speaking, such features can be divided into three groups: supplemental functions (functions that may be missing in some environments and operating systems), replacement functions (providing a uniform and easier to use interface for commonly used standard functions), and extensions (which provide additional functionality beyond standard functions).

gdb uses various features provided by the libiberty library, for instance the C++ demangler, the IEEE floating format support functions, the input options parser `getopt', the `obstack' extension, and other functions.

12.5.1 obstacks in gdb

The obstack mechanism provides a convenient way to allocate and free chunks of memory. Each obstack is a pool of memory that is managed like a stack. Objects (of any nature, size and alignment) are allocated and freed in a LIFO fashion on an obstack (see libiberty's documentation for a more detailed explanation of obstacks).

The most noticeable use of the obstacks in gdb is in object files. There is an obstack associated with each internal representation of an object file. Lots of things get allocated on these obstacks: dictionary entries, blocks, blockvectors, symbols, minimal symbols, types, vectors of fundamental types, class fields of types, object file section lists, object file section offsets lists, line tables, symbol tables, partial symbol tables, string tables, symbol table private data, macro tables, debug information sections and entries, import and export lists (som), unwind information (hppa), dwarf2 location expressions data. Plus various strings such as directory name strings, debug format strings, and names of types.

An essential and convenient property of all data on obstacks is that memory for it gets allocated (with obstack_alloc) at various times during a debugging session, but it is released all at once using the obstack_free function. The obstack_free function takes a pointer to where in the stack it must start the deletion from (much like the cleanup chains have a pointer to where to start the cleanups). Because of the stack-like structure of the obstacks, this makes it possible to free only a top portion of the obstack. There are a few instances in gdb where this happens: calls to obstack_free are done after some local data is allocated to the obstack, and only the local data is deleted from the obstack. Of course this assumes that nothing between the obstack_alloc and the obstack_free allocates anything else on the same obstack. For this reason it is best and safest to use temporary obstacks.

Releasing the whole obstack is also not safe per se. It is safe only under the condition that we know the obstack's memory is no longer needed. In gdb we get rid of the obstacks only when we get rid of the whole objfile(s), for instance upon reading a new symbol file.

12.6 gnu-regex

Regex conditionals.


12.7 include

Next: , Previous: Support Libraries, Up: Top

13 Coding

This chapter covers topics that are lower-level than the major algorithms of gdb.

13.1 Cleanups

Cleanups are a structured way to deal with things that need to be done later.

When your code does something (e.g., xmalloc some memory, or open a file) that needs to be undone later (e.g., xfree the memory or close the file), it can make a cleanup. The cleanup will be done at some future point: when the command is finished and control returns to the top level; when an error occurs and the stack is unwound; or when your code decides it's time to explicitly perform cleanups. Alternatively you can elect to discard the cleanups you created.


struct cleanup *old_chain;
Declare a variable which will hold a cleanup chain handle.

old_chain = make_cleanup (function, arg);
Make a cleanup which will cause function to be called with arg (a char *) later. The result, old_chain, is a handle that can later be passed to do_cleanups or discard_cleanups. Unless you are going to call do_cleanups or discard_cleanups, you can ignore the result from make_cleanup.

do_cleanups (old_chain);
Do all cleanups added to the chain since the corresponding make_cleanup call was made.

discard_cleanups (old_chain);
Same as do_cleanups except that it just removes the cleanups from the chain and does not call the specified functions.

Cleanups are implemented as a chain. The handle returned by make_cleanup includes the cleanup passed to the call and any later cleanups appended to the chain (but not yet discarded or performed). E.g.:

     make_cleanup (a, 0);
     struct cleanup *old = make_cleanup (b, 0);
     make_cleanup (c, 0);
     do_cleanups (old);

will call c() and b() but will not call a(). The cleanup that calls a() will remain in the cleanup chain, and will be done later unless otherwise discarded.

Your function should explicitly do or discard the cleanups it creates. Failing to do this leads to non-deterministic behavior since the caller will arbitrarily do or discard your function's cleanups. This requirement leads to two common cleanup styles.

The first style is try/finally. Before it exits, your code-block calls do_cleanups with the old cleanup chain and thus ensures that your code-block's cleanups are always performed. For instance, the following code-segment avoids a memory leak problem (even when error is called and a forced stack unwind occurs) by ensuring that the xfree will always be called:

     struct cleanup *old = make_cleanup (null_cleanup, 0);
     data = xmalloc (sizeof blah);
     make_cleanup (xfree, data);
     ... blah blah ...
     do_cleanups (old);

The second style is try/except. Before it exits, your code-block calls discard_cleanups with the old cleanup chain and thus ensures that any created cleanups are not performed. For instance, the following code segment ensures that the file will be closed, but only if there is an error:

     FILE *file = fopen ("afile", "r");
     struct cleanup *old = make_cleanup (close_file, file);
     ... blah blah ...
     discard_cleanups (old);
     return file;

Some functions, e.g. fputs_filtered() or error(), specify that they “should not be called when cleanups are not in place”. This means that any actions you need to reverse in the case of an error or interruption must be on the cleanup chain before you call these functions, since they might never return to your code (they `longjmp' instead).

13.2 Per-architecture module data

The multi-arch framework includes a mechanism for adding module specific per-architecture data-pointers to the struct gdbarch architecture object.

A module registers one or more per-architecture data-pointers using:

— Function: struct gdbarch_data *gdbarch_data_register_pre_init (gdbarch_data_pre_init_ftype *pre_init)

pre_init is used to allocate, on demand, an initial value for a per-architecture data-pointer, using the architecture's obstack (passed in as a parameter). Since pre_init can be called during architecture creation, it is not parameterized with the architecture, and must not call modules that use per-architecture data.

— Function: struct gdbarch_data *gdbarch_data_register_post_init (gdbarch_data_post_init_ftype *post_init)

post_init is used to obtain an initial value for a per-architecture data-pointer after architecture creation. Since post_init is always called after architecture creation, it both receives the fully initialized architecture and is free to call modules that use per-architecture data (care needs to be taken to ensure that those other modules do not try to call back to this module, as that would create cycles in the initialization call graph).

These functions return a struct gdbarch_data that is used to identify the per-architecture data-pointer added for that module.

The per-architecture data-pointer is accessed using the function:

— Function: void *gdbarch_data (struct gdbarch *gdbarch, struct gdbarch_data *data_handle)

Given the architecture gdbarch and module data handle data_handle (returned by gdbarch_data_register_pre_init or gdbarch_data_register_post_init), this function returns the current value of the per-architecture data-pointer. If the data pointer is NULL, it is first initialized by calling the corresponding pre_init or post_init method.

The examples below assume the following definitions:

     struct nozel { int total; };
     static struct gdbarch_data *nozel_handle;

A module can extend the architecture vector, adding additional per-architecture data, using the pre_init method. The module's per-architecture data is then initialized during architecture creation.

In the below, the module's per-architecture nozel is added. An architecture can specify its nozel by calling set_gdbarch_nozel from gdbarch_init.

     static void *
     nozel_pre_init (struct obstack *obstack)
     {
       struct nozel *data = OBSTACK_ZALLOC (obstack, struct nozel);
       return data;
     }

     extern void
     set_gdbarch_nozel (struct gdbarch *gdbarch, int total)
     {
       struct nozel *data = gdbarch_data (gdbarch, nozel_handle);
       data->total = total;
     }

A module can on-demand create architecture dependant data structures using post_init.

In the below, the nozel's total is computed on-demand by nozel_post_init using information obtained from the architecture.

     static void *
     nozel_post_init (struct gdbarch *gdbarch)
     {
       struct nozel *data = GDBARCH_OBSTACK_ZALLOC (gdbarch, struct nozel);
       data->total = gdbarch... (gdbarch);
       return data;
     }

     extern int
     nozel_total (struct gdbarch *gdbarch)
     {
       struct nozel *data = gdbarch_data (gdbarch, nozel_handle);
       return data->total;
     }

13.3 Wrapping Output Lines

Output that goes through printf_filtered or fputs_filtered or fputs_demangled needs only to have calls to wrap_here added in places that would be good breaking points. The utility routines will take care of actually wrapping if the line width is exceeded.

The argument to wrap_here is an indentation string which is printed only if the line breaks there. This argument is saved away and used later. It must remain valid until the next call to wrap_here or until a newline has been printed through the *_filtered functions. Don't pass in a local variable and then return!

It is usually best to call wrap_here after printing a comma or space. If you call it before printing a space, make sure that your indentation properly accounts for the leading space that will print if the line wraps there.

Any function or set of functions that produce filtered output must finish by printing a newline, to flush the wrap buffer, before switching to unfiltered (printf) output. Symbol reading routines that print warnings are a good example.

13.4 gdb Coding Standards

gdb follows the GNU coding standards, as described in etc/standards.texi. This file is also available for anonymous FTP from GNU archive sites. gdb takes a strict interpretation of the standard; in general, when the GNU standard recommends a practice but does not require it, gdb requires it.

gdb follows an additional set of coding standards specific to gdb, as described in the following sections.

13.4.1 ISO C

gdb assumes an ISO/IEC 9899:1990 (a.k.a. ISO C90) compliant compiler.

gdb does not assume an ISO C or POSIX compliant C library.

13.4.2 Memory Management

gdb does not use the functions malloc, realloc, calloc, free and asprintf.

gdb uses the functions xmalloc, xrealloc and xcalloc when allocating memory. Unlike malloc these functions do not return when the memory pool is empty. Instead, they unwind the stack using cleanups. These functions return NULL when requested to allocate a chunk of memory of size zero.

Pragmatics: By using these functions, the need to check every memory allocation is removed. These functions provide portable behavior.

gdb does not use the function free.

gdb uses the function xfree to return memory to the memory pool. Consistent with ISO-C, this function ignores a request to free a NULL pointer.

Pragmatics: On some systems free fails when passed a NULL pointer.

gdb can use the non-portable function alloca for the allocation of small temporary values (such as strings).

Pragmatics: This function is very non-portable. Some systems restrict the memory being allocated to no more than a few kilobytes.

gdb uses the string function xstrdup and the print function xstrprintf.

Pragmatics: asprintf and strdup can fail. Print functions such as sprintf are very prone to buffer overflow errors.

13.4.3 Compiler Warnings

With few exceptions, developers should include the configuration option `--enable-gdb-build-warnings=,-Werror' when building gdb. The exceptions are listed in the file gdb/MAINTAINERS.

This option causes gdb (when built using GCC) to be compiled with a carefully selected list of compiler warning flags. Any warnings produced by those flags are treated as errors.

The current list of warning flags includes:

Since the gdb coding standard requires all functions to be declared using a prototype, this flag has the side effect of ensuring that prototyped functions are always visible without resorting to `-Wstrict-prototypes'.
Such code often appears to work except on instruction set architectures that use register windows.
Since gdb uses the format printf attribute on all printf like functions these check not just printf calls but also calls to functions such as fprintf_unfiltered.
This warning includes uses of the assignment operator within an if statement.
This warning has the additional benefit of detecting the absence of the case reserved word in a switch statement:
          enum { FD_SCHEDULED, NOTHING_SCHEDULED } sched;
          switch (sched)
            {
            case FD_SCHEDULED:
              ...
            }


Pragmatics: Due to the way that gdb is implemented most functions have unused parameters. Consequently the warning `-Wunused-parameter' is precluded from the list. The macro ATTRIBUTE_UNUSED is not used as it leads to false negatives — it is not an error to have ATTRIBUTE_UNUSED on a parameter that is being used. The options `-Wall' and `-Wunused' are also precluded because they both include `-Wunused-parameter'.

Pragmatics: gdb has not simply accepted the warnings enabled by `-Wall -Werror -W...'. Instead it is selecting warnings when and where their benefits can be demonstrated.

13.4.4 Formatting

The standard GNU recommendations for formatting must be followed strictly.

A function declaration should not have its name in column zero. A function definition should have its name in column zero.

     /* Declaration */
     static void foo (void);
     /* Definition */
     void
     foo (void)

Pragmatics: This simplifies scripting. Function definitions can be found using `^function-name'.

There must be a space between a function or macro name and the opening parenthesis of its argument list (except for macro definitions, as required by C). There must not be a space after an open paren/bracket or before a close paren/bracket.

While additional whitespace is generally helpful for reading, do not use more than one blank line to separate blocks, and avoid adding whitespace after the end of a program line (as of 1/99, some 600 lines had whitespace after the semicolon). Excess whitespace causes difficulties for diff and patch utilities.

Pointers are declared using the traditional K&R C style:

     void *foo;

and not:

     void * foo;
     void* foo;


The standard GNU requirements on comments must be followed strictly.

Block comments must appear in the following form, with no /*- or */-only lines, and no leading *:

     /* Wait for control to return from inferior to debugger.  If inferior
        gets a signal, we may decide to start it up again instead of
        returning.  That is why there is a loop in this function.  When
        this function actually returns it means the inferior should be left
        stopped and gdb should read more commands.  */

(Note that this format is encouraged by Emacs; tabbing for a multi-line comment works correctly, and M-q fills the block consistently.)

Put a blank line between the block comments preceding function or variable definitions, and the definition itself.

In general, put function-body comments on lines by themselves, rather than trying to fit them into the 20 characters left at the end of a line, since either the comment or the code will inevitably get longer than will fit, and then somebody will have to move it anyhow.

13.4.6 C Usage

Code must not depend on the sizes of C data types, the format of the host's floating point numbers, the alignment of anything, or the order of evaluation of expressions.

Use functions freely. There are only a handful of compute-bound areas in gdb that might be affected by the overhead of a function call, mainly in symbol reading. Most of gdb's performance is limited by the target interface (whether serial line or system call).

However, use functions with moderation. A thousand one-line functions are just as hard to understand as a single thousand-line function.

Macros are bad, M'kay. (But if you have to use a macro, make sure that the macro arguments are protected with parentheses.)

Declarations like `struct foo *' should be used in preference to declarations like `typedef struct foo { ... } *foo_ptr'.

13.4.7 Function Prototypes

Prototypes must be used when both declaring and defining a function. Prototypes for gdb functions must include both the argument type and name, with the name matching that used in the actual function definition.

All external functions should have a declaration in a header file that callers include, except for _initialize_* functions, which must be external so that init.c construction works, but shouldn't be visible to random source files.

Where a source file needs a forward declaration of a static function, that declaration must appear in a block near the top of the source file.

13.4.8 Internal Error Recovery

During its execution, gdb can encounter two types of errors: user errors and internal errors. User errors include not only a user entering an incorrect command but also problems arising from corrupt object files and system errors when interacting with the target. Internal errors include situations where gdb has detected, at run time, a corrupt or erroneous situation.

When reporting an internal error, gdb uses internal_error and gdb_assert.

gdb must not call abort or assert.

Pragmatics: There is no internal_warning function. Either the code detected a user error, recovered from it, and issued a warning, or the code failed to correctly recover from the user error and issued an internal_error.

13.4.9 File Names

Any file used when building the core of gdb must be in lower case. Any file used when building the core of gdb must be 8.3 unique. These requirements apply to both source and generated files.

Pragmatics: The core of gdb must be buildable on many platforms including DJGPP and MacOS/HFS. Every time an unfriendly file is introduced to the build process, both the makefile and the configure script need to be modified accordingly. Compare the convoluted conversion process needed to transform COPYING into copying.c with the far simpler conversion needed to produce version.c.

Any non-8.3-compliant file (that is not used when building the core of gdb) must be added to gdb/config/djgpp/fnchange.lst.

Pragmatics: This is clearly a compromise.

When gdb has a local version of a system header file (e.g., string.h), the file name is based on the POSIX header, prefixed with gdb_ (gdb_string.h). These headers should be relatively independent: they should use only macros defined by configure, the compiler, or the host; they should include only system headers; they should refer only to system types. They may be shared between multiple programs, e.g. gdb and gdbserver.

For other files `-' is used as the separator.

13.4.10 Include Files

A .c file should include defs.h first.

A .c file should directly include the .h file of every declaration and/or definition it directly refers to. It cannot rely on indirect inclusion.

A .h file should directly include the .h file of every declaration and/or definition it directly refers to. It cannot rely on indirect inclusion. Exception: The file defs.h does not need to be directly included.

An external declaration should only appear in one include file.

An external declaration should never appear in a .c file. Exception: a declaration for the _initialize function that pacifies -Wmissing-declarations.

A typedef definition should only appear in one include file.

An opaque struct declaration can appear in multiple .h files. Where possible, a .h file should use an opaque struct declaration instead of an include.

All .h files should be wrapped in:

     #ifndef INCLUDE_FILE_NAME_H
     #define INCLUDE_FILE_NAME_H
     header body
     #endif

13.4.11 Clean Design and Portable Implementation

In addition to getting the syntax right, there's the little question of semantics. Some things are done in certain ways in gdb because long experience has shown that the more obvious ways caused various kinds of trouble.

You can't assume the byte order of anything that comes from a target (including values, object files, and instructions). Such things must be byte-swapped using SWAP_TARGET_AND_HOST in gdb, or one of the swap routines defined in bfd.h, such as bfd_get_32.

You can't assume that you know what interface is being used to talk to the target system. All references to the target must go through the current target_ops vector.

You can't assume that the host and target machines are the same machine (except in the “native” support modules). In particular, you can't assume that the target machine's header files will be available on the host machine. Target code must bring along its own header files, written from scratch or explicitly donated by their owner to avoid copyright problems.

Insertion of new #ifdef's will be frowned upon. It's much better to write the code portably than to conditionalize it for various systems.

New #ifdef's which test for specific compilers or manufacturers or operating systems are unacceptable. All #ifdef's should test for features. The information about which configurations contain which features should be segregated into the configuration files. Experience has proven far too often that a feature unique to one particular system often creeps into other systems; and that a conditional based on some predefined macro for your current system will become worthless over time, as new versions of your system come out that behave differently with regard to this feature.

Adding code that handles specific architectures, operating systems, target interfaces, or hosts, is not acceptable in generic code.

One particularly notorious area where system dependencies tend to creep in is handling of file names. The mainline gdb code assumes Posix semantics of file names: absolute file names begin with a forward slash /, slashes are used to separate leading directories, case-sensitive file names. These assumptions are not necessarily true on non-Posix systems such as MS-Windows. To avoid system-dependent code where you need to take apart or construct a file name, use the following portable macros:

This preprocessing symbol is defined to a non-zero value on hosts whose filesystems belong to the MS-DOS/MS-Windows family. Use this symbol to write conditional code which should only be compiled for such hosts.

Evaluates to a non-zero value if c is a directory separator character. On Unix and GNU/Linux systems, only a slash / is such a character, but on Windows, both / and \ will pass.

Evaluates to a non-zero value if file is an absolute file name. For Unix and GNU/Linux hosts, a name which begins with a slash / is absolute. On DOS and Windows, d:/foo and x:\bar are also absolute file names.

Calls a function which compares file names f1 and f2 as appropriate for the underlying host filesystem. For Posix systems, this simply calls strcmp; on case-insensitive filesystems it will call strcasecmp instead.

Evaluates to a character which separates directories in PATH-style lists, typically held in environment variables. This character is `:' on Unix, `;' on DOS and Windows.

This evaluates to a constant string you should use to produce an absolute filename from leading directories and the file's basename. SLASH_STRING is "/" on most systems, but might be "\\" for some Windows-based ports.

In addition to using these macros, be sure to use portable library functions whenever possible. For example, to extract a directory or a basename part from a file name, use the dirname and basename library functions (available in libiberty for platforms which don't provide them), instead of searching for a slash with strrchr.

Another way to generalize gdb along a particular interface is with an attribute struct. For example, gdb has been generalized to handle multiple kinds of remote interfaces—not by #ifdefs everywhere, but by defining the target_ops structure and having a current target (as well as a stack of targets below it, for memory references). Whenever something needs to be done that depends on which remote interface we are using, a flag in the current target_ops structure is tested (e.g., target_has_stack), or a function is called through a pointer in the current target_ops structure. In this way, when a new remote interface is added, only one module needs to be touched—the one that actually implements the new remote interface. Other examples of attribute-structs are BFD access to multiple kinds of object file formats, or gdb's access to multiple source languages.

Please avoid duplicating code. For example, in gdb 3.x all the code interfacing between ptrace and the rest of gdb was duplicated in *-dep.c, and so changing something was very painful. In gdb 4.x, these have all been consolidated into infptrace.c. infptrace.c can deal with variations between systems the same way any system-independent file would (hooks, #if defined, etc.), and machines which are radically different don't need to use infptrace.c at all.

All debugging code must be controllable using the `set debug module' command. Do not use printf to print trace messages; use fprintf_unfiltered (gdb_stdlog, ...). Do not use #ifdef DEBUG.


14 Porting gdb

Most of the work in making gdb compile on a new machine is in specifying the configuration of the machine. This is done in a dizzying variety of header files and configuration scripts, which we hope to make more sensible soon. Let's say your new host is called an xyz (e.g., `sun4'), and its full three-part configuration name is xarch-xvend-xos (e.g., `sparc-sun-sunos4').


15 Versions and Branches

15.1 Versions

gdb's version is determined by the file gdb/ and takes one of the following forms:

an official release (e.g., 6.2 or 6.2.1)
a snapshot taken at YYYY-MM-DD-gmt
a cvs check out drawn on YYYY-MM-DD
major.minor.patchlevel.YYYYMMDD (vendor)
a vendor specific release of gdb that, while based on
major.minor.patchlevel.YYYYMMDD, may include additional changes

gdb's mainline uses the major and minor version numbers from the most recent release branch, with a patchlevel of 50. At the time each new release branch is created, the mainline's major and minor version numbers are updated.

gdb's release branch is similar. When the branch is cut, the patchlevel is changed from 50 to 90. As draft releases are drawn from the branch, the patchlevel is incremented. Once the first release (major.minor) has been made, the patchlevel is set to 0 and updates have an incremented patchlevel.

For snapshots and cvs check outs, it is also possible to identify the cvs origin:

drawn from the head of mainline cvs
major.minor.91.YYYYMMDD ...
drawn from a release branch prior to the release
major.minor.1.YYYYMMDD ...
drawn from a release branch after the release
If the previous gdb version is 6.1 and the current version is 6.2, then, substituting 6 for major and 1 or 2 for minor, here's an illustration of a typical sequence:

             |                    <gdb_6_2-branch>
             |                          |        6.1.90 (draft #1)
             |                          |
             |                          |        6.1.91 (draft #2)
             |                          |
             |                          |        6.2 (release)
             |                          |
             |                          |        6.2.1 (update)
             |                          |         <branch closed>
             |                     <gdb_6_3-branch>
             |                          |        6.2.90 (draft #1)
             |                          |

15.2 Release Branches

gdb draws a release series (6.2, 6.2.1, ...) from a single release branch, and identifies that branch using the cvs branch tags:


Pragmatics: To help identify the date at which a branch or release is made, both the branchpoint and release tags include the date that they are cut (YYYYMMDD) in the tag. The branch tag, denoting the head of the branch, does not need this.

15.3 Vendor Branches

To avoid version conflicts, vendors are expected to modify the file gdb/ to include a vendor-unique alphabetic identifier (an official gdb release never uses alphabetic characters in its version identifier). E.g., `6.2widgit2', or `6.2 (Widgit Inc Patch 2)'.

15.4 Experimental Branches

15.4.1 Guidelines

gdb permits the creation of branches, cut from the cvs repository, for experimental development. Branches make it possible for developers to share preliminary work, and maintainers to examine significant new developments.

The following are a set of guidelines for creating such branches:

a branch has an owner
The owner can set further policy for a branch, but may not change the ground rules. In particular, they can set a policy for commits (be it adding more reviewers or deciding who can commit).
all commits are posted
All changes committed to a branch shall also be posted to the gdb patches mailing list. While commentary on such changes is encouraged, people should remember that the changes only apply to a branch.
all commits are covered by an assignment
This ensures that all changes belong to the Free Software Foundation, and avoids the possibility that the branch may become contaminated.
a branch is focused
A focused branch has a single objective or goal, and does not contain unnecessary or irrelevant changes. Cleanups, where identified, should be pushed into the mainline as soon as possible.
a branch tracks mainline
This keeps the level of divergence under control. It also keeps the pressure on developers to push cleanups and other stuff into the mainline.
a branch shall contain the entire gdb module
The module gdb should be specified when creating a branch (branches of individual files should be avoided). See Tags.
a branch shall be branded using
The file gdb/ shall be modified so that it identifies the branch owner and branch name, e.g., `6.2 (Owner Name)'.

15.4.2 Tags

To simplify the identification of gdb branches, the following branch tagging convention is strongly recommended:

The branch point and corresponding branch tag. YYYYMMDD is the date that the branch was created. A branch is created using the sequence:
          cvs rtag owner_name-YYYYMMDD-branchpoint gdb
          cvs rtag -b -r owner_name-YYYYMMDD-branchpoint \
             owner_name-YYYYMMDD-branch gdb

The tagged point, on the mainline, that was used when merging the branch on yyyymmdd. To merge in all changes since the branch was cut, use a command sequence like:
          cvs rtag owner_name-yyyymmdd-mergepoint gdb
          cvs update \

Similar sequences can be used to just merge in changes since the last merge.

For further information on cvs, see Concurrent Versions System.


16 Releasing gdb

16.1 Branch Commit Policy

The branch commit policy is pretty slack. gdb releases 5.0, 5.1 and 5.2 all used the policy below:

Pragmatics: Provided updates are restricted to non-core functionality there is little chance that a broken change will be fatal. This means that changes such as adding a new architecture or (within reason) support for a new host are considered acceptable.

16.2 Obsoleting code

Before anything else, poke the other developers (and around the source code) to see if there is anything that can be removed from gdb (an old target, an unused file).

Obsolete code is identified by adding an OBSOLETE prefix to every line. Doing this means that it is easy to identify something that has been obsoleted when grepping through the sources.

The process is done in stages — this is mainly to ensure that the wider gdb community has a reasonable opportunity to respond. Remember, everything on the Internet takes a week.

  1. Post the proposal on the GDB mailing list. Creating a bug report to track the task's state is also highly recommended.
  2. Wait a week or so.
  3. Post the proposal on the GDB Announcement mailing list.
  4. Wait a week or so.
  5. Go through and edit all relevant files and lines so that they are prefixed with the word OBSOLETE.
  6. Wait until the next GDB version, containing this obsolete code, has been released.
  7. Remove the obsolete code.

Maintainer note: While removing old code is regrettable, it is hopefully better for gdb's long term development. Firstly, it helps the developers by removing code that is either no longer relevant or simply wrong. Secondly, since it removes any history associated with the file (effectively clearing the slate), the developer has a much freer hand when it comes to fixing broken files.

16.3 Before the Branch

The most important objective at this stage is to find and fix simple changes that become a pain to track once the branch is created. For instance, configuration problems that stop gdb from even building. If you can't get the problem fixed, document it in the gdb/PROBLEMS file.

Prompt for gdb/NEWS

People always forget. Send a post reminding them, but if you know something interesting happened, add it yourself. The schedule script will mention this in its e-mail.

Review gdb/README

Grab one of the nightly snapshots and then walk through the gdb/README looking for anything that can be improved. The schedule script will mention this in its e-mail.

Refresh any imported files.

A number of files are taken from external repositories. They include:

Check the ARI

A.R.I. is an awk script (Awk Regression Index ;-) that checks for a number of errors and coding conventions. The checks include things like using malloc instead of xmalloc and file naming problems. There shouldn't be any regressions.

16.3.1 Review the bug data base

Close anything obviously fixed.

16.3.2 Check all cross targets build

The targets are listed in gdb/MAINTAINERS.

16.4 Cut the Branch

Create the branch

     $  u=5.1
     $  v=5.2
     $  V=`echo $v | sed 's/\./_/g'`
     $  D=`date -u +%Y-%m-%d`
     $  echo $u $V $D
     5.1 5_2 2002-03-03
     $  echo cvs -f -d rtag \
     -D $D-gmt gdb_$V-$D-branchpoint insight+dejagnu
     cvs -f -d rtag
     -D 2002-03-03-gmt gdb_5_2-2002-03-03-branchpoint insight+dejagnu
     $  ^echo ^^
     $  echo cvs -f -d rtag \
     -b -r gdb_$V-$D-branchpoint gdb_$V-branch insight+dejagnu
     cvs -f -d rtag \
     -b -r gdb_5_2-2002-03-03-branchpoint gdb_5_2-branch insight+dejagnu
     $  ^echo ^^


     $  u=5.1
     $  v=5.2
     $  V=`echo $v | sed 's/\./_/g'`
     $  echo $u $v$V
     5.1 5_2
     $  cd /tmp
     $  echo cvs -f -d co \
     -r gdb_$V-branch src/gdb/
     cvs -f -d co
      -r gdb_5_2-branch src/gdb/
     $  ^echo ^^
     U src/gdb/
     $  cd src/gdb
     $  echo $u.90-0000-00-00-cvs >
     $  cat
     $  cvs -f commit

Update the web and news pages


Tweak cron to track the new branch

The file gdbadmin/cron/crontab contains gdbadmin's cron table. This file needs to be updated so that:

See the file gdbadmin/cron/README for how to install the updated cron table.

The file gdbadmin/ss/README should also be reviewed to reflect any changes. That file is copied to both the branch/ and current/ snapshot directories.

Update the NEWS and README files

The NEWS file needs to be updated so that on the branch it refers to changes in the current release while on the trunk it also refers to changes since the current release.

The README file needs to be updated so that it refers to the current release.

Post the branch info

Send an announcement to the mailing lists:

Pragmatics: The branch creation is sent to the announce list to ensure that people not subscribed to the higher volume discussion list are alerted.

The announcement should include:

16.5 Stabilize the branch

Something goes here.

16.6 Create a Release

The process of creating and then making available a release is broken down into a number of stages. The first part addresses the technical process of creating a releasable tar ball. The later stages address the process of releasing that tar ball.

When making a release candidate just the first section is needed.

16.6.1 Create a release candidate

The objective at this stage is to create a set of tar balls that can be made available as a formal release (or as a less formal release candidate).

Freeze the branch

Send out an e-mail notifying everyone that the branch is frozen to

Establish a few defaults.
     $  b=gdb_5_2-branch
     $  v=5.2
     $  t=/sourceware/snapshot-tmp/gdbadmin-tmp
     $  echo $t/$b/$v
     $  mkdir -p $t/$b/$v
     $  cd $t/$b/$v
     $  pwd
     $  which autoconf


Check out the relevant modules:
     $  for m in gdb insight dejagnu; do
     ( mkdir -p $m && cd $m && cvs -q -f -d /cvs/src co -P -r $b $m )
     done


Update relevant files.
Major releases get their comments added as part of the mainline. Minor releases should probably mention any significant bugs that were fixed.

Don't forget to include the ChangeLog entry.

          $  emacs gdb/src/gdb/NEWS
          c-x 4 a
          c-x c-s c-x c-c
          $  cp gdb/src/gdb/NEWS insight/src/gdb/NEWS
          $  cp gdb/src/gdb/ChangeLog insight/src/gdb/ChangeLog

You'll need to update:
          $  emacs gdb/src/gdb/README
          c-x 4 a
          c-x c-s c-x c-c
          $  cp gdb/src/gdb/README insight/src/gdb/README
          $  cp gdb/src/gdb/ChangeLog insight/src/gdb/ChangeLog

Maintainer note: Hopefully the README file was reviewed before the initial branch was cut so just a simple substitute is needed to get it updated.

Maintainer note: Other projects generate README and INSTALL from the core documentation. This might be worth pursuing.

          $  echo $v > gdb/src/gdb/
          $  cat gdb/src/gdb/
          $  emacs gdb/src/gdb/
          c-x 4 a
          ... Bump to version ...
          c-x c-s c-x c-c
          $  cp gdb/src/gdb/ insight/src/gdb/
          $  cp gdb/src/gdb/ChangeLog insight/src/gdb/ChangeLog

Dejagnu is more complicated. The version number is a parameter to AM_INIT_AUTOMAKE. Tweak it to read something like gdb-5.1.91.

Don't forget to re-generate configure.

Don't forget to include a ChangeLog entry.

          $  emacs dejagnu/src/dejagnu/
          c-x 4 a
          c-x c-s c-x c-c
          $  ( cd  dejagnu/src/dejagnu && autoconf )
Do the dirty work

This is identical to the process used to create the daily snapshot.

     $  for m in gdb insight; do
     ( cd $m/src && gmake -f src-release $m.tar )
     done
     $  ( m=dejagnu; cd $m/src && gmake -f src-release $m.tar.bz2 )

If the top level source directory does not have src-release (gdb version 5.3.1 or earlier), try these commands instead:

     $  for m in gdb insight; do
     ( cd $m/src && gmake -f $m.tar )
     done
     $  ( m=dejagnu; cd $m/src && gmake -f $m.tar.bz2 )
Check the source files

You're looking for files that have mysteriously disappeared. distclean has the habit of deleting files it shouldn't. Watch out for the update cronjob.

     $  ( cd gdb/src && cvs -f -q -n update )
     M djunpack.bat
     ? gdb-5.1.91.tar
     ? proto-toplev
     ... lots of generated files ...
     M gdb/ChangeLog
     M gdb/NEWS
     M gdb/README
     M gdb/
     ... lots of generated files ...

Don't worry about the or gdb/ files; they were generated (something strange with CVS means that they didn't get suppressed). Fixing it would be nice though.

Create compressed versions of the release
     $  cp */src/*.tar .
     $  cp */src/*.bz2 .
     $  ls -F
     dejagnu/ dejagnu-gdb-5.2.tar.bz2 gdb/ gdb-5.2.tar insight/ insight-5.2.tar
     $  for m in gdb insight; do
     bzip2 -v -9 -c $m-$v.tar > $m-$v.tar.bz2
     gzip -v -9 -c $m-$v.tar > $m-$v.tar.gz
     done


16.6.2 Sanity check the tar ball

Pick a popular machine (Solaris/PPC?) and try the build on that.

     $  bunzip2 < gdb-5.2.tar.bz2 | tar xpf -
     $  cd gdb-5.2
     $  ./configure
     $  make
     $  ./gdb/gdb ./gdb/gdb
     GNU gdb 5.2
     (gdb)  b main
     Breakpoint 1 at 0x80732bc: file main.c, line 734.
     (gdb)  run
     Starting program: /tmp/gdb-5.2/gdb/gdb
     Breakpoint 1, main (argc=1, argv=0xbffff8b4) at main.c:734
     734       catch_errors (captured_main, &args, "", RETURN_MASK_ALL);
     (gdb)  print args
     $1 = {argc = 136426532, argv = 0x821b7f0}

16.6.3 Make a release candidate available

If this is a release candidate then the only remaining steps are:

  1. Commit and ChangeLog
  2. Tweak (and ChangeLog) to read L.M.N-0000-00-00-cvs so that the version update process can restart.
  3. Make the release candidate available in
  4. Notify the relevant mailing lists ( and that the candidate is available.

16.6.4 Make a formal release available

(And you thought all that was required was to post an e-mail.)

Install on sware

Copy the new files to both the release and the old release directory:

     $  cp *.bz2 *.gz ~ftp/pub/gdb/old-releases/
     $  cp *.bz2 *.gz ~ftp/pub/gdb/releases

Clean up the releases directory so that only the most recent releases are available (e.g. keep 5.2 and 5.2.1 but remove 5.1):

     $  cd ~ftp/pub/gdb/releases
     $  rm ...

Update the file README and .message in the releases directory:

     $  vi README
     $  rm -f .message
     $  ln README .message
Update the web pages.

This file, which is posted as the official announcement, includes:

These files include:

These pages also need to be regenerated using
You need to find the magic command that is used to generate the online docs from the .tar.bz2. The best way is to look in the output from one of the nightly cron jobs and then just edit accordingly. Something like:
          $  ~/ss/update-web-docs \
           ~ftp/pub/gdb/releases/gdb-5.2.tar.bz2 \
           $PWD/www \
           /www/sourceware/htdocs/gdb/download/onlinedocs \

Just like the online documentation. Something like:
          $  /bin/sh ~/ss/update-web-ari \
           ~ftp/pub/gdb/releases/gdb-5.2.tar.bz2 \
           $PWD/www \
           /www/sourceware/htdocs/gdb/download/ari \
Shadow the pages onto gnu

Something goes here.

Install the gdb tar ball on GNU

At the time of writing, the GNU machine was in ~ftp/gnu/gdb.


Post the ANNOUNCEMENT file you created above to:

16.6.5 Cleanup

The release is out but you're still not finished.

Commit outstanding changes

In particular you'll need to commit any changes to:

Tag the release

Something like:

     $  d=`date -u +%Y-%m-%d`
     $  echo $d
     $  ( cd insight/src/gdb && cvs -f -q update )
     $  ( cd insight/src && cvs -f -q tag gdb_5_2-$d-release )

Insight is used since that contains more of the release than gdb (dejagnu doesn't get tagged but I think we can live with that).

Mention the release on the trunk

Just put something in the ChangeLog so that the trunk also indicates when the release was made.

Restart gdb/

If gdb/ does not contain an ISO date such as 2002-01-24 then the daily cronjob won't update it. Having committed all the release changes, it can be set to 5.2.0_0000-00-00-cvs which will restart things (yes, the _ is important: it affects the snapshot process).

Don't forget the ChangeLog.

Merge into trunk

The files committed to the branch may also need changes merged into the trunk.

Revise the release schedule

Post a revised release schedule to GDB Discussion List with an updated announcement. The schedule can be generated by running:

     $  ~/ss/schedule `date +%s` schedule

The first parameter is approximate date/time in seconds (from the epoch) of the most recent release.

Also update the schedule cronjob.

16.7 Post release

Remove any OBSOLETE code.


17 Testsuite

The testsuite is an important component of the gdb package. While it is always worthwhile to encourage user testing, in practice this is rarely sufficient; users typically use only a small subset of the available commands, and it has proven all too common for a change to cause a significant regression that went unnoticed for some time.

The gdb testsuite uses the DejaGNU testing framework. DejaGNU is built using Tcl and expect. The tests themselves are calls to various Tcl procs; the framework runs all the procs and summarizes the passes and fails.

17.1 Using the Testsuite

To run the testsuite, simply go to the gdb object directory (or to the testsuite's objdir) and type make check. This just sets up some environment variables and invokes DejaGNU's runtest script. While the testsuite is running, you'll get mentions of which test file is in use, and a mention of any unexpected passes or fails. When the testsuite is finished, you'll get a summary that looks like this:

                     === gdb Summary ===
     # of expected passes            6016
     # of unexpected failures        58
     # of unexpected successes       5
     # of expected failures          183
     # of unresolved testcases       3
     # of untested testcases         5

The ideal test run consists of expected passes only; however, reality conspires to keep us from this ideal. Unexpected failures indicate real problems, whether in gdb or in the testsuite. Expected failures are still failures, but ones which it has been decided are too hard to deal with at the time; for instance, a test case might work everywhere except on AIX, and there is no prospect of the AIX case being fixed in the near future. Expected failures should not be added lightly, since you may be masking serious bugs in gdb. Unexpected successes are expected fails that are passing for some reason, while unresolved and untested cases often indicate some minor catastrophe, such as the compiler being unable to deal with a test program.

When making any significant change to gdb, you should run the testsuite before and after the change, to confirm that there are no regressions. Note that truly complete testing would require that you run the testsuite with all supported configurations and a variety of compilers; however, this is more than is really necessary. In many cases testing with a single configuration is sufficient. Other useful options are to test one big-endian (Sparc) and one little-endian (x86) host, a cross config with a builtin simulator (powerpc-eabi, mips-elf), or a 64-bit host (Alpha).

If you add new functionality to gdb, please consider adding tests for it as well; this way future gdb hackers can detect and fix their changes that break the functionality you added. Similarly, if you fix a bug that was not previously reported as a test failure, please add a test case for it. Some cases are extremely difficult to test, such as code that handles host OS failures or bugs in particular versions of compilers, and it's OK not to try to write tests for all of those.

DejaGNU supports separate build, host, and target machines. However, some gdb test scripts do not work if the build machine and the host machine are not the same. In such an environment, these scripts will give a result of “UNRESOLVED”, like this:

     UNRESOLVED: gdb.base/example.exp: This test script does not work on a remote host.

17.2 Testsuite Organization

The testsuite is entirely contained in gdb/testsuite. While the testsuite includes some makefiles and configury, these are very minimal, and used for little besides cleaning up, since the tests themselves handle the compilation of the programs that gdb will run. The file testsuite/lib/gdb.exp contains common utility procs useful for all gdb tests, while the directory testsuite/config contains configuration-specific files, typically used for special-purpose definitions of procs like gdb_load and gdb_start.

The tests themselves are to be found in testsuite/gdb.* and subdirectories of those. The names of the test files must always end with .exp. DejaGNU collects the test files by wildcarding in the test directories, so both subdirectories and individual files get chosen and run in alphabetical order.

The following table lists the main types of subdirectories and what they are for. Since DejaGNU finds test files no matter where they are located, and since each test file sets up its own compilation and execution environment, this organization is simply for convenience and intelligibility.

This is the base testsuite. The tests in it should apply to all configurations of gdb (but generic native-only tests may live here). The test programs should be in the subset of C that is valid K&R, ANSI/ISO, and C++ (#ifdefs are allowed if necessary, for instance for prototypes).
Language-specific tests for any language lang besides C. Examples are gdb.cp and
Non-portable tests. The tests are specific to a particular configuration (host or target), such as HP-UX or eCos. An example is gdb.hp, for HP-UX.
Tests specific to a particular compiler. As of this writing (June 1999), there aren't currently any groups of tests in this category that couldn't just as sensibly be made platform-specific, but one could imagine a gdb.gcc, for tests of gdb's handling of GCC extensions.
Tests that exercise a specific gdb subsystem in more depth. For instance, gdb.disasm exercises various disassemblers, while gdb.stabs tests pathways through the stabs symbol reader.

17.3 Writing Tests

In many areas, the gdb tests are already quite comprehensive; you should be able to copy existing tests to handle new cases.

You should try to use gdb_test whenever possible, since it includes cases to handle all the unexpected errors that might happen. However, it doesn't cost anything to add new test procedures; for instance, gdb.base/exprs.exp defines a test_expr that calls gdb_test multiple times.

Only use send_gdb and gdb_expect when absolutely necessary, such as when gdb has several valid responses to a command.

The source language programs do not need to be in a consistent style. Since gdb is used to debug programs written in many different styles, it's worth having a mix of styles in the testsuite; for instance, some gdb bugs involving the display of source lines would never manifest themselves if the programs used GNU coding style uniformly.


18 Hints

Check the README file, it often has useful information that does not appear anywhere else in the directory.


18.1 Getting Started

gdb is a large and complicated program, and if you are first starting to work on it, it can be hard to know where to start. Fortunately, if you know how to go about it, there are ways to figure out what is going on.

This manual, the gdb Internals manual, has information which applies generally to many parts of gdb.

Information about particular functions or data structures is located in comments with those functions or data structures. If you run across a function or a global variable which does not have a comment correctly explaining what it does, this can be thought of as a bug in gdb; feel free to submit a bug report, with a suggested comment if you can figure out what the comment should say. If you find a comment which is actually wrong, be especially sure to report that.

Comments explaining the function of macros defined in host, target, or native dependent files can be in several places. Sometimes they are repeated every place the macro is defined. Sometimes they are where the macro is used. Sometimes there is a header file which supplies a default definition of the macro, and the comment is there. This manual also documents all the available macros.

Start with the header files. Once you have some idea of how gdb's internal symbol tables are stored (see symtab.h, gdbtypes.h), you will find it much easier to understand the code which uses and creates those symbol tables.

You may wish to process the information you are getting somehow, to enhance your understanding of it. Summarize it, translate it to another language, add some (perhaps trivial or non-useful) feature to gdb, use the code to predict what a test case would do and write the test case and verify your prediction, etc. If you are reading code and your eyes are starting to glaze over, this is a sign you need to use a more active approach.

Once you have a part of gdb to start with, you can find more specifically the part you are looking for by stepping through each function with the next command. Do not use step or you will quickly get distracted; when the function you are stepping through calls another function try only to get a big-picture understanding (perhaps using the comment at the beginning of the function being called) of what it does. This way you can identify which of the functions being called by the function you are stepping through is the one which you are interested in. You may need to examine the data structures generated at each stage, with reference to the comments in the header files explaining what the data structures are supposed to look like.

Of course, this same technique can be used if you are just reading the code, rather than actually stepping through it. The same general principle applies—when the code you are looking at calls something else, just try to understand generally what the code being called does, rather than worrying about all its details.

A good place to start when tracking down some particular area is with a command which invokes that feature. Suppose you want to know how single-stepping works. As a gdb user, you know that the step command invokes single-stepping. The command is invoked via command tables (see command.h); by convention the function which actually performs the command is formed by taking the name of the command and adding `_command', or in the case of an info subcommand, `_info'. For example, the step command invokes the step_command function and the info display command invokes display_info. When this convention is not followed, you might have to use grep or M-x tags-search in emacs, or run gdb on itself and set a breakpoint in execute_command.

If all of the above fail, it may be appropriate to ask for information on bug-gdb. But never post a generic question like “I was wondering if anyone could give me some tips about understanding gdb”—if we had some magic secret we would put it in this manual. Suggestions for improving the manual are always welcome, of course.


18.2 Debugging gdb with itself

If gdb is limping on your machine, this is the preferred way to get it fully functional. Be warned that in some ancient Unix systems, like Ultrix 4.2, a program can't be running in one process while it is being debugged in another. Rather than typing the command ./gdb ./gdb, which works on Suns and such, you can copy gdb to gdb2 and then type ./gdb ./gdb2.

When you run gdb in the gdb source directory, it will read a .gdbinit file that sets up some simple things to make debugging gdb easier. The info command, when executed without a subcommand in a gdb being debugged by gdb, will pop you back up to the top level gdb. See .gdbinit for details.

If you use emacs, you will probably want to do a make TAGS after you configure your distribution; this will put the machine dependent routines for your local machine where they will be accessed first by M-.

Also, make sure that you've either compiled gdb with your local cc, or have run fixincludes if you are compiling with gcc.

18.3 Submitting Patches

Thanks for thinking of offering your changes back to the community of gdb users. In general we like to get well designed enhancements. Thanks also for checking in advance about the best way to transfer the changes.

The gdb maintainers will only install “cleanly designed” patches. This manual summarizes what we believe to be clean design for gdb.

If the maintainers don't have time to put the patch in when it arrives, or if there is any question about a patch, it goes into a large queue with everyone else's patches and bug reports.

The legal issue is that to incorporate substantial changes requires a copyright assignment from you and/or your employer, granting ownership of the changes to the Free Software Foundation. You can get the standard documents for doing this by sending mail to and asking for it. We recommend that people write in "All programs owned by the Free Software Foundation" as "NAME OF PROGRAM", so that changes in many programs (not just gdb, but GAS, Emacs, GCC, etc) can be contributed with only one piece of legalese pushed through the bureaucracy and filed with the FSF. We can't start merging changes until this paperwork is received by the FSF (their rules, which we follow since we maintain it for them).

Technically, the easiest way to receive changes is to receive each feature as a small context diff or unidiff, suitable for patch. Each message sent to me should include the changes to C code and header files for a single feature, plus ChangeLog entries for each directory where files were modified, and diffs for any changes needed to the manuals (gdb/doc/gdb.texinfo or gdb/doc/gdbint.texinfo). If there are a lot of changes for a single feature, they can be split down into multiple messages.

In this way, if we read and like the feature, we can add it to the sources with a single patch command, do some testing, and check it in. If you leave out the ChangeLog, we have to write one. If you leave out the doc, we have to puzzle out what needs documenting. Etc., etc.

The reason to send each change in a separate message is that we will not install some of the changes. They'll be returned to you with questions or comments. If we're doing our job correctly, the message back to you will say what you have to fix in order to make the change acceptable. The reason to have separate messages for separate features is so that the acceptable changes can be installed while one or more changes are being reworked. If multiple features are sent in a single message, we tend not to put in the effort to sort out the acceptable changes from the unacceptable, so none of the features get installed until all are acceptable.

If this sounds painful or authoritarian, well, it is. But we get a lot of bug reports and a lot of patches, and many of them don't get installed because we don't have the time to finish the job that the bug reporter or the contributor could have done. Patches that arrive complete, working, and well designed, tend to get installed on the day they arrive. The others go into a queue and get installed as time permits, which, since the maintainers have many demands to meet, may not be for quite some time.

Please send patches directly to the gdb maintainers.

18.4 Obsolete Conditionals

Fragments of old code in gdb sometimes reference or set the following configuration macros. They should not be used by new code, and old uses should be removed as those parts of the debugger are otherwise touched.

STACK_END_ADDR
This macro used to define where the end of the stack appeared, for use in interpreting core file formats that don't record this address in the core file itself. This information is now configured in BFD, and gdb gets the info portably from there. The values in gdb's configuration files should be moved into BFD configuration files (if needed there), and deleted from all of gdb's config files.

Any foo-xdep.c file that references STACK_END_ADDR is so old that it has never been converted to use BFD. Now that's old!

Next: , Previous: Hints, Up: Top

Appendix A gdb Currently Available Observers

A.1 Implementation rationale

An observer is an entity which is interested in being notified when GDB reaches certain states, or certain events occur in GDB. The entity being observed is called the subject. To receive notifications, the observer attaches a callback to the subject. One subject can have several observers.

observer.c implements an internal generic low-level event notification mechanism. This generic event notification mechanism is then re-used to implement the exported high-level notification management routines for all possible notifications.

The current implementation of the generic observer provides support for contextual data. This contextual data is given to the subject when attaching the callback. In return, the subject will provide this contextual data back to the observer as a parameter of the callback.

Note that the current support for the contextual data is only partial, as it lacks a mechanism that would deallocate this data when the callback is detached. This is not a problem so far, as this contextual data is only used internally to hold a function pointer. Later on, if a certain observer needs to provide support for user-level contextual data, then the generic notification mechanism will need to be enhanced to allow the observer to provide a routine to deallocate the data when attaching the callback.

The observer implementation is also currently not reentrant. In particular, it is therefore not possible to call the attach or detach routines during a notification.

A.2 Debugging

Observer notifications can be traced using the command `set debug observer 1' (see Optional messages about internal happenings).

A.3 normal_stop Notifications

gdb notifies all normal_stop observers when the inferior execution has just stopped, the associated messages and annotations have been printed, and the control is about to be returned to the user.

Note that the normal_stop notification is not emitted when the execution stops at a breakpoint whose condition is not met. If the breakpoint has an associated command list, the commands are executed after the notification is emitted.

The following interfaces are available to manage observers:

— Function: extern struct observer *observer_attach_event (observer_event_ftype *f)

Using the function f, create an observer that is notified whenever event occurs; return the observer.

— Function: extern void observer_detach_event (struct observer *observer);

Remove observer from the list of observers to be notified when event occurs.

— Function: extern void observer_notify_event (void);

Send a notification to all event observers.

The following observable events are defined:

— Function: void normal_stop (struct bpstats *bs)

The inferior has stopped for real.

— Function: void target_changed (struct target_ops *target)

The target's register contents have changed.

— Function: void executable_changed (void *unused_args)

The executable being debugged by gdb has changed: the user decided to debug a different program, or the program being debugged has been modified since it was loaded by the debugger (by being recompiled, for instance).

— Function: void inferior_created (struct target_ops *objfile, int from_tty)

gdb has just connected to an inferior. For `run', gdb calls this observer while the inferior is still stopped at the entry-point instruction. For `attach' and `core', gdb calls this observer immediately after connecting to the inferior, and before any information on the inferior has been printed.

— Function: void solib_loaded (struct so_list *solib)

The shared library specified by solib has been loaded. Note that when gdb calls this observer, the library's symbols probably haven't been loaded yet.

— Function: void solib_unloaded (struct so_list *solib)

The shared library specified by solib has been unloaded.

Next: , Previous: GDB Observers, Up: Top

Appendix B GNU Free Documentation License

Version 1.2, November 2002
     Copyright © 2000,2001,2002 Free Software Foundation, Inc.
     59 Temple Place, Suite 330, Boston, MA  02111-1307, USA
     Everyone is permitted to copy and distribute verbatim copies
     of this license document, but changing it is not allowed.

0. PREAMBLE

    The purpose of this License is to make a manual, textbook, or other functional and useful document free in the sense of freedom: to assure everyone the effective freedom to copy and redistribute it, with or without modifying it, either commercially or noncommercially. Secondarily, this License preserves for the author and publisher a way to get credit for their work, while not being considered responsible for modifications made by others.

    This License is a kind of “copyleft”, which means that derivative works of the document must themselves be free in the same sense. It complements the GNU General Public License, which is a copyleft license designed for free software.

    We have designed this License in order to use it for manuals for free software, because free software needs free documentation: a free program should come with manuals providing the same freedoms that the software does. But this License is not limited to software manuals; it can be used for any textual work, regardless of subject matter or whether it is published as a printed book. We recommend this License principally for works whose purpose is instruction or reference.


1. APPLICABILITY AND DEFINITIONS

    This License applies to any manual or other work, in any medium, that contains a notice placed by the copyright holder saying it can be distributed under the terms of this License. Such a notice grants a world-wide, royalty-free license, unlimited in duration, to use that work under the conditions stated herein. The “Document”, below, refers to any such manual or work. Any member of the public is a licensee, and is addressed as “you”. You accept the license if you copy, modify or distribute the work in a way requiring permission under copyright law.

    A “Modified Version” of the Document means any work containing the Document or a portion of it, either copied verbatim, or with modifications and/or translated into another language.

    A “Secondary Section” is a named appendix or a front-matter section of the Document that deals exclusively with the relationship of the publishers or authors of the Document to the Document's overall subject (or to related matters) and contains nothing that could fall directly within that overall subject. (Thus, if the Document is in part a textbook of mathematics, a Secondary Section may not explain any mathematics.) The relationship could be a matter of historical connection with the subject or with related matters, or of legal, commercial, philosophical, ethical or political position regarding them.

    The “Invariant Sections” are certain Secondary Sections whose titles are designated, as being those of Invariant Sections, in the notice that says that the Document is released under this License. If a section does not fit the above definition of Secondary then it is not allowed to be designated as Invariant. The Document may contain zero Invariant Sections. If the Document does not identify any Invariant Sections then there are none.

    The “Cover Texts” are certain short passages of text that are listed, as Front-Cover Texts or Back-Cover Texts, in the notice that says that the Document is released under this License. A Front-Cover Text may be at most 5 words, and a Back-Cover Text may be at most 25 words.

    A “Transparent” copy of the Document means a machine-readable copy, represented in a format whose specification is available to the general public, that is suitable for revising the document straightforwardly with generic text editors or (for images composed of pixels) generic paint programs or (for drawings) some widely available drawing editor, and that is suitable for input to text formatters or for automatic translation to a variety of formats suitable for input to text formatters. A copy made in an otherwise Transparent file format whose markup, or absence of markup, has been arranged to thwart or discourage subsequent modification by readers is not Transparent. An image format is not Transparent if used for any substantial amount of text. A copy that is not “Transparent” is called “Opaque”.

    Examples of suitable formats for Transparent copies include plain ascii without markup, Texinfo input format, LaTeX input format, SGML or XML using a publicly available DTD, and standard-conforming simple HTML, PostScript or PDF designed for human modification. Examples of transparent image formats include PNG, XCF and JPG. Opaque formats include proprietary formats that can be read and edited only by proprietary word processors, SGML or XML for which the DTD and/or processing tools are not generally available, and the machine-generated HTML, PostScript or PDF produced by some word processors for output purposes only.

    The “Title Page” means, for a printed book, the title page itself, plus such following pages as are needed to hold, legibly, the material this License requires to appear in the title page. For works in formats which do not have any title page as such, “Title Page” means the text near the most prominent appearance of the work's title, preceding the beginning of the body of the text.

    A section “Entitled XYZ” means a named subunit of the Document whose title either is precisely XYZ or contains XYZ in parentheses following text that translates XYZ in another language. (Here XYZ stands for a specific section name mentioned below, such as “Acknowledgements”, “Dedications”, “Endorsements”, or “History”.) To “Preserve the Title” of such a section when you modify the Document means that it remains a section “Entitled XYZ” according to this definition.

    The Document may include Warranty Disclaimers next to the notice which states that this License applies to the Document. These Warranty Disclaimers are considered to be included by reference in this License, but only as regards disclaiming warranties: any other implication that these Warranty Disclaimers may have is void and has no effect on the meaning of this License.

2. VERBATIM COPYING

    You may copy and distribute the Document in any medium, either commercially or noncommercially, provided that this License, the copyright notices, and the license notice saying this License applies to the Document are reproduced in all copies, and that you add no other conditions whatsoever to those of this License. You may not use technical measures to obstruct or control the reading or further copying of the copies you make or distribute. However, you may accept compensation in exchange for copies. If you distribute a large enough number of copies you must also follow the conditions in section 3.

    You may also lend copies, under the same conditions stated above, and you may publicly display copies.

3. COPYING IN QUANTITY

    If you publish printed copies (or copies in media that commonly have printed covers) of the Document, numbering more than 100, and the Document's license notice requires Cover Texts, you must enclose the copies in covers that carry, clearly and legibly, all these Cover Texts: Front-Cover Texts on the front cover, and Back-Cover Texts on the back cover. Both covers must also clearly and legibly identify you as the publisher of these copies. The front cover must present the full title with all words of the title equally prominent and visible. You may add other material on the covers in addition. Copying with changes limited to the covers, as long as they preserve the title of the Document and satisfy these conditions, can be treated as verbatim copying in other respects.

    If the required texts for either cover are too voluminous to fit legibly, you should put the first ones listed (as many as fit reasonably) on the actual cover, and continue the rest onto adjacent pages.

    If you publish or distribute Opaque copies of the Document numbering more than 100, you must either include a machine-readable Transparent copy along with each Opaque copy, or state in or with each Opaque copy a computer-network location from which the general network-using public has access to download using public-standard network protocols a complete Transparent copy of the Document, free of added material. If you use the latter option, you must take reasonably prudent steps, when you begin distribution of Opaque copies in quantity, to ensure that this Transparent copy will remain thus accessible at the stated location until at least one year after the last time you distribute an Opaque copy (directly or through your agents or retailers) of that edition to the public.

    It is requested, but not required, that you contact the authors of the Document well before redistributing any large number of copies, to give them a chance to provide you with an updated version of the Document.

4. MODIFICATIONS

    You may copy and distribute a Modified Version of the Document under the conditions of sections 2 and 3 above, provided that you release the Modified Version under precisely this License, with the Modified Version filling the role of the Document, thus licensing distribution and modification of the Modified Version to whoever possesses a copy of it. In addition, you must do these things in the Modified Version:

    1. Use in the Title Page (and on the covers, if any) a title distinct from that of the Document, and from those of previous versions (which should, if there were any, be listed in the History section of the Document). You may use the same title as a previous version if the original publisher of that version gives permission.
    2. List on the Title Page, as authors, one or more persons or entities responsible for authorship of the modifications in the Modified Version, together with at least five of the principal authors of the Document (all of its principal authors, if it has fewer than five), unless they release you from this requirement.
    3. State on the Title page the name of the publisher of the Modified Version, as the publisher.
    4. Preserve all the copyright notices of the Document.
    5. Add an appropriate copyright notice for your modifications adjacent to the other copyright notices.
    6. Include, immediately after the copyright notices, a license notice giving the public permission to use the Modified Version under the terms of this License, in the form shown in the Addendum below.
    7. Preserve in that license notice the full lists of Invariant Sections and required Cover Texts given in the Document's license notice.
    8. Include an unaltered copy of this License.
    9. Preserve the section Entitled “History”, Preserve its Title, and add to it an item stating at least the title, year, new authors, and publisher of the Modified Version as given on the Title Page. If there is no section Entitled “History” in the Document, create one stating the title, year, authors, and publisher of the Document as given on its Title Page, then add an item describing the Modified Version as stated in the previous sentence.
    10. Preserve the network location, if any, given in the Document for public access to a Transparent copy of the Document, and likewise the network locations given in the Document for previous versions it was based on. These may be placed in the “History” section. You may omit a network location for a work that was published at least four years before the Document itself, or if the original publisher of the version it refers to gives permission.
    11. For any section Entitled “Acknowledgements” or “Dedications”, Preserve the Title of the section, and preserve in the section all the substance and tone of each of the contributor acknowledgements and/or dedications given therein.
    12. Preserve all the Invariant Sections of the Document, unaltered in their text and in their titles. Section numbers or the equivalent are not considered part of the section titles.
    13. Delete any section Entitled “Endorsements”. Such a section may not be included in the Modified Version.
    14. Do not retitle any existing section to be Entitled “Endorsements” or to conflict in title with any Invariant Section.
    15. Preserve any Warranty Disclaimers.

    If the Modified Version includes new front-matter sections or appendices that qualify as Secondary Sections and contain no material copied from the Document, you may at your option designate some or all of these sections as invariant. To do this, add their titles to the list of Invariant Sections in the Modified Version's license notice. These titles must be distinct from any other section titles.

    You may add a section Entitled “Endorsements”, provided it contains nothing but endorsements of your Modified Version by various parties—for example, statements of peer review or that the text has been approved by an organization as the authoritative definition of a standard.

    You may add a passage of up to five words as a Front-Cover Text, and a passage of up to 25 words as a Back-Cover Text, to the end of the list of Cover Texts in the Modified Version. Only one passage of Front-Cover Text and one of Back-Cover Text may be added by (or through arrangements made by) any one entity. If the Document already includes a cover text for the same cover, previously added by you or by arrangement made by the same entity you are acting on behalf of, you may not add another; but you may replace the old one, on explicit permission from the previous publisher that added the old one.

    The author(s) and publisher(s) of the Document do not by this License give permission to use their names for publicity for or to assert or imply endorsement of any Modified Version.

5. COMBINING DOCUMENTS

    You may combine the Document with other documents released under this License, under the terms defined in section 4 above for modified versions, provided that you include in the combination all of the Invariant Sections of all of the original documents, unmodified, and list them all as Invariant Sections of your combined work in its license notice, and that you preserve all their Warranty Disclaimers.

    The combined work need only contain one copy of this License, and multiple identical Invariant Sections may be replaced with a single copy. If there are multiple Invariant Sections with the same name but different contents, make the title of each such section unique by adding at the end of it, in parentheses, the name of the original author or publisher of that section if known, or else a unique number. Make the same adjustment to the section titles in the list of Invariant Sections in the license notice of the combined work.

    In the combination, you must combine any sections Entitled “History” in the various original documents, forming one section Entitled “History”; likewise combine any sections Entitled “Acknowledgements”, and any sections Entitled “Dedications”. You must delete all sections Entitled “Endorsements.”

6. COLLECTIONS OF DOCUMENTS

    You may make a collection consisting of the Document and other documents released under this License, and replace the individual copies of this License in the various documents with a single copy that is included in the collection, provided that you follow the rules of this License for verbatim copying of each of the documents in all other respects.

    You may extract a single document from such a collection, and distribute it individually under this License, provided you insert a copy of this License into the extracted document, and follow this License in all other respects regarding verbatim copying of that document.

7. AGGREGATION WITH INDEPENDENT WORKS

    A compilation of the Document or its derivatives with other separate and independent documents or works, in or on a volume of a storage or distribution medium, is called an “aggregate” if the copyright resulting from the compilation is not used to limit the legal rights of the compilation's users beyond what the individual works permit. When the Document is included in an aggregate, this License does not apply to the other works in the aggregate which are not themselves derivative works of the Document.

    If the Cover Text requirement of section 3 is applicable to these copies of the Document, then if the Document is less than one half of the entire aggregate, the Document's Cover Texts may be placed on covers that bracket the Document within the aggregate, or the electronic equivalent of covers if the Document is in electronic form. Otherwise they must appear on printed covers that bracket the whole aggregate.

8. TRANSLATION

    Translation is considered a kind of modification, so you may distribute translations of the Document under the terms of section 4. Replacing Invariant Sections with translations requires special permission from their copyright holders, but you may include translations of some or all Invariant Sections in addition to the original versions of these Invariant Sections. You may include a translation of this License, and all the license notices in the Document, and any Warranty Disclaimers, provided that you also include the original English version of this License and the original versions of those notices and disclaimers. In case of a disagreement between the translation and the original version of this License or a notice or disclaimer, the original version will prevail.

    If a section in the Document is Entitled “Acknowledgements”, “Dedications”, or “History”, the requirement (section 4) to Preserve its Title (section 1) will typically require changing the actual title.

9. TERMINATION

    You may not copy, modify, sublicense, or distribute the Document except as expressly provided for under this License. Any other attempt to copy, modify, sublicense or distribute the Document is void, and will automatically terminate your rights under this License. However, parties who have received copies, or rights, from you under this License will not have their licenses terminated so long as such parties remain in full compliance.

10. FUTURE REVISIONS OF THIS LICENSE

    The Free Software Foundation may publish new, revised versions of the GNU Free Documentation License from time to time. Such new versions will be similar in spirit to the present version, but may differ in detail to address new problems or concerns. See http://www.gnu.org/copyleft/.

    Each version of the License is given a distinguishing version number. If the Document specifies that a particular numbered version of this License “or any later version” applies to it, you have the option of following the terms and conditions either of that specified version or of any later version that has been published (not as a draft) by the Free Software Foundation. If the Document does not specify a version number of this License, you may choose any version ever published (not as a draft) by the Free Software Foundation.

B.1 ADDENDUM: How to use this License for your documents

To use this License in a document you have written, include a copy of the License in the document and put the following copyright and license notices just after the title page:

       Copyright (C)  year  your name.
       Permission is granted to copy, distribute and/or modify this document
       under the terms of the GNU Free Documentation License, Version 1.2
       or any later version published by the Free Software Foundation;
       with no Invariant Sections, no Front-Cover Texts, and no Back-Cover
       Texts.  A copy of the license is included in the section entitled ``GNU
       Free Documentation License''.

If you have Invariant Sections, Front-Cover Texts and Back-Cover Texts, replace the “with...Texts.” line with this:

         with the Invariant Sections being list their titles, with
         the Front-Cover Texts being list, and with the Back-Cover Texts
         being list.

If you have Invariant Sections without Cover Texts, or some other combination of the three, merge those two alternatives to suit the situation.

If your document contains nontrivial examples of program code, we recommend releasing these examples in parallel under your choice of free software license, such as the GNU General Public License, to permit their use in free software.

Previous: GNU Free Documentation License, Up: Top



[1] The function cast is not portable ISO C.

[2] As of this writing (April 2001), setting the verbosity level is not yet implemented, and it is always returned as zero. So calling ui_out_message with a verbosity argument greater than zero will cause the message to never be printed.

[3] Some D10V instructions are actually pairs of 16-bit sub-instructions. However, since you can't jump into the middle of such a pair, code addresses can only refer to full 32 bit instructions, which is what matters in this explanation.

[4] The above is from the original example and uses K&R C. gdb has since converted to ISO C, but let's ignore that.