SYNOPSIS
--------
-'git-branch' [(-d | -D) <branchname>] | [[-f] <branchname> [<start-point>]]
+[verse]
+'git-branch' [[-f] <branchname> [<start-point>]]
+'git-branch' (-d | -D) <branchname>
DESCRIPTION
-----------
If no argument is provided, show available branches and mark current
branch with star. Otherwise, create a new branch of name <branchname>.
-
If a starting point is also specified, that will be where the branch is
created, otherwise it will be created at the current HEAD.
+With a `-d` or `-D` option, `<branchname>` will be deleted.
+
+
OPTIONS
-------
-d::
Examples
~~~~~~~~
-Start development off of a know tag::
+Start development off of a known tag::
+
------------
$ git clone git://git.kernel.org/pub/scm/.../linux-2.6 my2.6
SYNOPSIS
--------
-'git-checkout' [-f] [-b <new_branch>] [-m] [<branch>] [<paths>...]
+[verse]
+'git-checkout' [-f] [-b <new_branch>] [-m] [<branch>]
+'git-checkout' [-m] [<branch>] <paths>...
DESCRIPTION
-----------
-When <paths> are not given, this command switches branches, by
+When <paths> are not given, this command switches branches by
updating the index and working tree to reflect the specified
branch, <branch>, and updating HEAD to be <branch> or, if
-specified, <new_branch>.
+specified, <new_branch>. Using -b will cause <new_branch> to
+be created.
When <paths> are given, this command does *not* switch
branches. It updates the named paths in the working tree from
OPTIONS
-------
-f::
- Force an re-read of everything.
+ Force a re-read of everything.
-b::
Create a new branch and start it at <branch>.
-m::
- If you have local modifications to a file that is
- different between the current branch and the branch you
- are switching to, the command refuses to switch
- branches, to preserve your modifications in context.
- With this option, a three-way merge between the current
+ If you have local modifications to one or more files that
+ are different between the current branch and the branch to
+ which you are switching, the command refuses to switch
+ branches in order to preserve your modifications in context.
+ However, with this option, a three-way merge between the current
branch, your working tree contents, and the new branch
is done, and you will be on the new branch.
+
------------
. After working in a wrong branch, switching to the correct
-branch you would want to is done with:
+branch would be done using:
+
------------
$ git checkout mytopic
VISUAL and EDITOR environment variables to edit the commit log
message.
+Several environment variables are used during commits. They are
+documented in gitlink:git-commit-tree[1].
+
+
This command can run `commit-msg`, `pre-commit`, and
`post-commit` hooks. See link:hooks.html[hooks] for more
information.
CVS by default uses the unix username when writing its
commit logs. Using this option and an author-conv-file
in this format
-
++
+---------
exon=Andreas Ericsson <ae@op5.se>
spawn=Simon Pawn <spawn@frog-pond.org>
- git-cvsimport will make it appear as those authors had
- their GIT_AUTHOR_NAME and GIT_AUTHOR_EMAIL set properly
- all along.
-
- For convenience, this data is saved to $GIT_DIR/cvs-authors
- each time the -A option is provided and read from that same
- file each time git-cvsimport is run.
-
- It is not recommended to use this feature if you intend to
- export changes back to CVS again later with
- git-link[1]::git-cvsexportcommit.
+---------
++
+git-cvsimport will make it appear as if those authors had
+their GIT_AUTHOR_NAME and GIT_AUTHOR_EMAIL set properly
+all along.
++
+For convenience, this data is saved to $GIT_DIR/cvs-authors
+each time the -A option is provided and read from that same
+file each time git-cvsimport is run.
++
+It is not recommended to use this feature if you intend to
+export changes back to CVS again later with
+gitlink:git-cvsexportcommit[1].
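The author-conv-file format shown above maps CVS usernames to full git identities. As a rough illustration only (not git-cvsimport's actual code), a file in that format could be parsed like this; `parse_authors` is a hypothetical helper name:

```python
def parse_authors(text):
    """Parse 'username=Full Name <email>' lines into a dict.

    Blank lines and lines without '=' are skipped.
    """
    authors = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or "=" not in line:
            continue
        user, ident = line.split("=", 1)
        authors[user.strip()] = ident.strip()
    return authors
```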
OUTPUT
------
--------
[verse]
'git-fsck-objects' [--tags] [--root] [--unreachable] [--cache]
- [--standalone | --full] [--strict] [<object>*]
+ [--full] [--strict] [<object>*]
DESCRIPTION
-----------
Consider any object recorded in the index also as a head node for
an unreachability trace.
---standalone::
- Limit checks to the contents of GIT_OBJECT_DIRECTORY
- ($GIT_DIR/objects), making sure that it is consistent and
- complete without referring to objects found in alternate
- object pools listed in GIT_ALTERNATE_OBJECT_DIRECTORIES,
- nor packed git archives found in $GIT_DIR/objects/pack;
- cannot be used with --full.
-
--full::
Check not just objects in GIT_OBJECT_DIRECTORY
($GIT_DIR/objects), but also the ones found in alternate
- object pools listed in GIT_ALTERNATE_OBJECT_DIRECTORIES,
+ object pools listed in GIT_ALTERNATE_OBJECT_DIRECTORIES
+ or $GIT_DIR/objects/info/alternates,
and in packed git archives found in $GIT_DIR/objects/pack
and corresponding pack subdirectories in alternate
- object pools; cannot be used with --standalone.
+ object pools.
--strict::
Enable more strict checking, namely to catch a file mode
<option>...::
Either an option to pass to `grep` or `git-ls-files`.
-
- The following are the specific `git-ls-files` options
- that may be given: `-o`, `--cached`, `--deleted`, `--others`,
- `--killed`, `--ignored`, `--modified`, `--exclude=*`,
- `--exclude-from=*`, and `--exclude-per-directory=*`.
-
- All other options will be passed to `grep`.
++
+The following are the specific `git-ls-files` options
+that may be given: `-o`, `--cached`, `--deleted`, `--others`,
+`--killed`, `--ignored`, `--modified`, `--exclude=\*`,
+`--exclude-from=\*`, and `--exclude-per-directory=\*`.
++
+All other options will be passed to `grep`.
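The option routing described above — a fixed set of flags goes to `git-ls-files`, everything else to `grep` — can be sketched as a simple partition. This is an illustrative sketch with hypothetical names, not the wrapper's actual implementation:

```python
# Flags recognized as git-ls-files options; everything else goes to grep.
LS_FILES_FLAGS = {"-o", "--cached", "--deleted", "--others",
                  "--killed", "--ignored", "--modified"}
LS_FILES_PREFIXES = ("--exclude=", "--exclude-from=",
                     "--exclude-per-directory=")

def split_options(opts):
    """Partition opts into (ls_files_opts, grep_opts)."""
    ls_files, grep = [], []
    for o in opts:
        if o in LS_FILES_FLAGS or o.startswith(LS_FILES_PREFIXES):
            ls_files.append(o)
        else:
            grep.append(o)
    return ls_files, grep
```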
<pattern>::
The pattern to look for. The first non option is taken
OPTIONS
-------
--template=<template_directory>::
- Provide the directory in from which templates will be used.
+ Provide the directory from which templates will be used.
+ The default template directory is `/usr/share/git-core/templates`.
--shared::
Specify that the git repository is to be shared amongst several users.
DESCRIPTION
-----------
-This simply creates an empty git repository - basically a `.git` directory
-and `.git/object/??/`, `.git/refs/heads` and `.git/refs/tags` directories,
-and links `.git/HEAD` symbolically to `.git/refs/heads/master`.
+This command creates an empty git repository - basically a `.git` directory
+with subdirectories for `objects`, `refs/heads`, `refs/tags`, and
+template files.
+An initial `HEAD` file that references the HEAD of the master branch
+is also created.
+
+If `--template=<template_directory>` is specified, `<template_directory>`
+is used as the source of the template files rather than the default.
+The template files include some directory structure, some suggested
+"exclude patterns", and copies of non-executable "hook" files. The
+suggested patterns and hook files are all modifiable and extensible.
If the `$GIT_DIR` environment variable is set then it specifies a path
to use instead of `./.git` for the base of the repository.
is set to 'true' so that directories under `$GIT_DIR` are made group writable
(and g+sx, since the git group may be not the primary group of all users).
-
Running `git-init-db` in an existing repository is safe. It will not overwrite
things that are already there. The primary reason for rerunning `git-init-db`
is to pick up newly added templates.
/
D---E---F---G master
-From this point, the result of the following commands:
+From this point, the result of either of the following commands:
git-rebase master
git-rebase master topic
/
D---E---F---G master
-While, starting from the same point, the result of the following
+While, starting from the same point, the result of either of the following
commands:
git-rebase --onto master~1 master
<upstream>::
Upstream branch to compare against.
-<head>::
+<branch>::
Working branch; defaults to HEAD.
Author
------------
-With this,`git show-branch` without extra parameters would show
+With this, `git show-branch` without extra parameters would show
only the primary branches. In addition, if you happen to be on
your topic branch, it is shown as well.
-A <author_file>::
Read a file with lines on the form
++
+------
+ username = User's Full Name <email@addr.es>
- username = User's Full Name <email@addr.es>
-
- and use "User's Full Name <email@addr.es>" as the GIT
- author and committer for Subversion commits made by
- "username". If encountering a commit made by a user not in the
- list, abort.
-
- For convenience, this data is saved to $GIT_DIR/svn-authors
- each time the -A option is provided, and read from that same
- file each time git-svnimport is run with an existing GIT
- repository without -A.
+------
++
+and use "User's Full Name <email@addr.es>" as the GIT
+author and committer for Subversion commits made by
+"username". If a commit by a user not in the list is
+encountered, the import aborts.
++
+For convenience, this data is saved to $GIT_DIR/svn-authors
+each time the -A option is provided, and read from that same
+file each time git-svnimport is run with an existing GIT
+repository without -A.
-m::
Attempt to detect merges based on the commit message. This option
By default, differences for merge commits are not shown.
With this flag, show differences to that commit from all
of its parents.
-
- However, it is not very useful in general, although it
- *is* useful on a file-by-file basis.
++
+However, it is not very useful in general, although it
+*is* useful on a file-by-file basis.
Examples
--------
gitlink:git-shortlog[1]::
Summarizes 'git log' output.
+gitlink:git-show[1]::
+ Show one commit log and its diff.
+
gitlink:git-show-branch[1]::
Show branches and their commits.
a valid head 'name'
(i.e. the contents of `$GIT_DIR/refs/heads/<head>`).
-<snap>::
- a valid snapshot 'name'
- (i.e. the contents of `$GIT_DIR/refs/snap/<snap>`).
-
File/Directory Structure
------------------------
Please see link:repository-layout.html[repository layout] document.
+Read link:hooks.html[hooks] for more details about each hook.
+
Higher level SCMs may provide and manage additional information in the
`$GIT_DIR`.
update
------
-This hook is invoked by `git-receive-pack`, which is invoked
-when a `git push` is done against the repository. It takes
-three parameters, name of the ref being updated, old object name
-stored in the ref, and the new objectname to be stored in the
-ref. Exiting with non-zero status from this hook prevents
-`git-receive-pack` from updating the ref.
-
-This can be used to prevent 'forced' update on certain refs by
+This hook is invoked by `git-receive-pack` on the remote repository,
+which happens when a `git push` is done on a local repository.
+Just before updating the ref on the remote repository, the update hook
+is invoked. Its exit status determines the success or failure of
+the ref update.
+
+The hook executes once for each ref to be updated, and takes
+three parameters:
+ - the name of the ref being updated,
+ - the old object name stored in the ref,
+ - and the new objectname to be stored in the ref.
+
+A zero exit from the update hook allows the ref to be updated.
+Exiting with a non-zero status prevents `git-receive-pack`
+from updating the ref.
+
+This hook can be used to prevent 'forced' update on certain refs by
making sure that the object name is a commit object that is a
descendant of the commit object named by the old object name.
+That is, to enforce a "fast forward only" policy.
+
+It could also be used to log the old..new status. However, it
+does not know the entire set of branches, so it would end up
+firing one e-mail per ref when used naively.
+
Another use suggested on the mailing list is to use this hook to
implement access control which is finer grained than the one
based on filesystem group.
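The "fast forward only" policy described above boils down to an ancestry test: the old object name must be reachable from the new one by walking parent links. A minimal sketch of that test over an in-memory parent map (hypothetical names; a real update hook would ask git itself, e.g. via rev-list):

```python
def is_fast_forward(old, new, parents):
    """Return True if `old` is reachable from `new` via parent links.

    `parents` maps a commit id to a list of its parent ids.
    """
    stack, seen = [new], set()
    while stack:
        c = stack.pop()
        if c == old:
            return True
        if c in seen:
            continue
        seen.add(c)
        stack.extend(parents.get(c, ()))
    return False
```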
want to report something to the git-send-pack on the other end,
you can redirect your output to your stderr.
+
post-update
-----------
-This hook is invoked by `git-receive-pack`, which is invoked
-when a `git push` is done against the repository. It takes
-variable number of parameters; each of which is the name of ref
-that was actually updated.
+This hook is invoked by `git-receive-pack` on the remote repository,
+which happens when a `git push` is done on a local repository.
+It executes on the remote repository once after all the refs have
+been updated.
+
+It takes a variable number of parameters, each of which is the
+name of a ref that was actually updated.
This hook is meant primarily for notification, and cannot affect
the outcome of `git-receive-pack`.
+The post-update hook can tell which heads were pushed,
+but it does not know what their original and updated values were,
+so it is a poor place to log old..new.
+
The default post-update hook, when enabled, runs
`git-update-server-info` to keep the information used by dumb
-transport up-to-date.
+transports (e.g., http) up-to-date. If you are publishing
+a git repository that is accessible via http, you should
+probably enable this hook.
The standard output of this hook is sent to /dev/null; if you
want to report something to the git-send-pack on the other end,
commands. A handful of sample hooks are installed when
`git init-db` is run, but all of them are disabled by
default. To enable, they need to be made executable.
+ Read link:hooks.html[hooks] for more details about
+ each hook.
index::
The current index file for the repository. It is
doc:
$(MAKE) -C Documentation all
+TAGS:
+ rm -f TAGS
+ find . -name '*.[hcS]' -print | xargs etags -a
+
+tags:
+ rm -f tags
+ find . -name '*.[hcS]' -print | xargs ctags -a
### Testing rules
clean:
rm -f *.o mozilla-sha1/*.o arm/*.o ppc/*.o compat/*.o $(LIB_FILE)
rm -f $(ALL_PROGRAMS) git$X
- rm -f *.spec *.pyc *.pyo */*.pyc */*.pyo common-cmds.h
+ rm -f *.spec *.pyc *.pyo */*.pyc */*.pyo common-cmds.h TAGS tags
rm -rf $(GIT_TARNAME)
rm -f $(GIT_TARNAME).tar.gz git-core_$(GIT_VERSION)-*.tar.gz
$(MAKE) -C Documentation/ clean
rm -f GIT-VERSION-FILE
.PHONY: all install clean strip
-.PHONY: .FORCE-GIT-VERSION-FILE
+.PHONY: .FORCE-GIT-VERSION-FILE TAGS tags
line += digits;
len -= digits;
- *p2 = *p1;
+ *p2 = 1;
if (*line == ',') {
digits = parse_num(line+1, p2);
if (!digits)
patch->new_name = NULL;
}
- if (patch->is_new != !oldlines)
+ if (patch->is_new && oldlines)
return error("new file depends on old contents");
if (patch->is_delete != !newlines) {
if (newlines)
break;
}
}
+ if (oldlines || newlines)
+ return -1;
/* If a fragment ends with an incomplete line, we failed to include
* it in the above loop because we hit oldlines == newlines == 0
* before seeing it.
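The `*p2 = 1` change above makes an omitted count in a hunk range like `@@ -l[,s] +l[,s] @@` default to 1, instead of inheriting the start value. In Python terms the range parsing might look like this (a sketch, not the actual C parser):

```python
import re

def parse_range(spec):
    """Parse a unified-diff range 'start[,count]'; count defaults to 1."""
    m = re.fullmatch(r"(\d+)(?:,(\d+))?", spec)
    if not m:
        raise ValueError("bad range: %r" % spec)
    start = int(m.group(1))
    count = int(m.group(2)) if m.group(2) else 1
    return start, count
```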
#include "tree.h"
#include "blob.h"
#include "diff.h"
+#include "diffcore.h"
#include "revision.h"
#define DEBUG 0
char *buf;
unsigned long size;
int num_lines;
-// const char* path;
+ const char* pathname;
+
+ void* topo_data;
};
struct chunk {
unsigned mode, int stage);
static unsigned char blob_sha1[20];
+static const char* blame_file;
static int get_blob_sha1(struct tree *t, const char *pathname,
unsigned char *sha1)
{
int i;
const char *pathspec[2];
+ blame_file = pathname;
pathspec[0] = pathname;
pathspec[1] = NULL;
memset(blob_sha1, 0, sizeof(blob_sha1));
if (S_ISDIR(mode))
return READ_TREE_RECURSIVE;
+ if (strncmp(blame_file, base, baselen) ||
+ strcmp(blame_file + baselen, pathname))
+ return -1;
+
memcpy(blob_sha1, sha1, 20);
return -1;
}
return info->line_map[line];
}
-static int fill_util_info(struct commit *commit, const char *path)
+static struct util_info* get_util(struct commit *commit)
{
- struct util_info *util;
- if (commit->object.util)
- return 0;
+ struct util_info *util = commit->object.util;
+
+ if (util)
+ return util;
util = xmalloc(sizeof(struct util_info));
+ util->buf = NULL;
+ util->size = 0;
+ util->line_map = NULL;
+ util->num_lines = -1;
+ util->pathname = NULL;
+ commit->object.util = util;
+ return util;
+}
+
+static int fill_util_info(struct commit *commit)
+{
+ struct util_info *util = commit->object.util;
+
+ assert(util);
+ assert(util->pathname);
- if (get_blob_sha1(commit->tree, path, util->sha1)) {
- free(util);
+ if (get_blob_sha1(commit->tree, util->pathname, util->sha1))
return 1;
- } else {
- util->buf = NULL;
- util->size = 0;
- util->line_map = NULL;
- util->num_lines = -1;
- commit->object.util = util;
+ else
return 0;
- }
}
static void alloc_line_map(struct commit *commit)
static void init_first_commit(struct commit* commit, const char* filename)
{
- struct util_info* util;
+ struct util_info* util = commit->object.util;
int i;
- if (fill_util_info(commit, filename))
+ util->pathname = filename;
+ if (fill_util_info(commit))
die("fill_util_info failed");
alloc_line_map(commit);
if(num_parents == 0)
*initial = commit;
- if(fill_util_info(commit, path))
+ if (fill_util_info(commit))
continue;
alloc_line_map(commit);
printf("parent: %s\n",
sha1_to_hex(parent->object.sha1));
- if(fill_util_info(parent, path)) {
+ if (fill_util_info(parent)) {
num_parents--;
continue;
}
} while ((commit = get_revision(rev)) != NULL);
}
+
+static int compare_tree_path(struct rev_info* revs,
+ struct commit* c1, struct commit* c2)
+{
+ const char* paths[2];
+ struct util_info* util = c2->object.util;
+ paths[0] = util->pathname;
+ paths[1] = NULL;
+
+ diff_tree_setup_paths(get_pathspec(revs->prefix, paths));
+ return rev_compare_tree(c1->tree, c2->tree);
+}
+
+
+static int same_tree_as_empty_path(struct rev_info *revs, struct tree* t1,
+ const char* path)
+{
+ const char* paths[2];
+ paths[0] = path;
+ paths[1] = NULL;
+
+ diff_tree_setup_paths(get_pathspec(revs->prefix, paths));
+ return rev_same_tree_as_empty(t1);
+}
+
+static const char* find_rename(struct commit* commit, struct commit* parent)
+{
+ struct util_info* cutil = commit->object.util;
+ struct diff_options diff_opts;
+ const char *paths[1];
+ int i;
+
+ if (DEBUG) {
+ printf("find_rename commit: %s ",
+ sha1_to_hex(commit->object.sha1));
+ puts(sha1_to_hex(parent->object.sha1));
+ }
+
+ diff_setup(&diff_opts);
+ diff_opts.recursive = 1;
+ diff_opts.detect_rename = DIFF_DETECT_RENAME;
+ paths[0] = NULL;
+ diff_tree_setup_paths(paths);
+ if (diff_setup_done(&diff_opts) < 0)
+ die("diff_setup_done failed");
+
+ diff_tree_sha1(commit->tree->object.sha1, parent->tree->object.sha1,
+ "", &diff_opts);
+ diffcore_std(&diff_opts);
+
+ for (i = 0; i < diff_queued_diff.nr; i++) {
+ struct diff_filepair *p = diff_queued_diff.queue[i];
+
+ if (p->status == 'R' && !strcmp(p->one->path, cutil->pathname)) {
+ if (DEBUG)
+ printf("rename %s -> %s\n", p->one->path, p->two->path);
+ return p->two->path;
+ }
+ }
+
+ return 0;
+}
+
+static void simplify_commit(struct rev_info *revs, struct commit *commit)
+{
+ struct commit_list **pp, *parent;
+
+ if (!commit->tree)
+ return;
+
+ if (!commit->parents) {
+ struct util_info* util = commit->object.util;
+ if (!same_tree_as_empty_path(revs, commit->tree,
+ util->pathname))
+ commit->object.flags |= TREECHANGE;
+ return;
+ }
+
+ pp = &commit->parents;
+ while ((parent = *pp) != NULL) {
+ struct commit *p = parent->item;
+
+ if (p->object.flags & UNINTERESTING) {
+ pp = &parent->next;
+ continue;
+ }
+
+ parse_commit(p);
+ switch (compare_tree_path(revs, p, commit)) {
+ case REV_TREE_SAME:
+ parent->next = NULL;
+ commit->parents = parent;
+ get_util(p)->pathname = get_util(commit)->pathname;
+ return;
+
+ case REV_TREE_NEW:
+ {
+
+ struct util_info* util = commit->object.util;
+ if (revs->remove_empty_trees &&
+ same_tree_as_empty_path(revs, p->tree,
+ util->pathname)) {
+ const char* new_name = find_rename(commit, p);
+ if (new_name) {
+ struct util_info* putil = get_util(p);
+ if (!putil->pathname)
+ putil->pathname = strdup(new_name);
+ } else {
+ *pp = parent->next;
+ continue;
+ }
+ }
+ }
+
+ /* fallthrough */
+ case REV_TREE_DIFFERENT:
+ pp = &parent->next;
+ if (!get_util(p)->pathname)
+ get_util(p)->pathname =
+ get_util(commit)->pathname;
+ continue;
+ }
+ die("bad tree compare for commit %s",
+ sha1_to_hex(commit->object.sha1));
+ }
+ commit->object.flags |= TREECHANGE;
+}
+
+
struct commit_info
{
char* author;
return time_buf;
}
+static void topo_setter(struct commit* c, void* data)
+{
+ struct util_info* util = c->object.util;
+ util->topo_data = data;
+}
+
+static void* topo_getter(struct commit* c)
+{
+ struct util_info* util = c->object.util;
+ return util->topo_data;
+}
+
int main(int argc, const char **argv)
{
int i;
int sha1_len = 8;
int compability = 0;
int options = 1;
+ struct commit* start_commit;
- int num_args;
const char* args[10];
struct rev_info rev;
struct commit_info ci;
const char *buf;
int max_digits;
+ int longest_file, longest_author;
+ int found_rename;
const char* prefix = setup_git_directory();
+ git_config(git_default_config);
for(i = 1; i < argc; i++) {
if(options) {
strcpy(filename_buf, filename);
filename = filename_buf;
- {
- struct commit* c;
- if (get_sha1(commit, sha1))
- die("get_sha1 failed, commit '%s' not found", commit);
- c = lookup_commit_reference(sha1);
-
- if (fill_util_info(c, filename)) {
- printf("%s not found in %s\n", filename, commit);
- return 1;
- }
+ if (get_sha1(commit, sha1))
+ die("get_sha1 failed, commit '%s' not found", commit);
+ start_commit = lookup_commit_reference(sha1);
+ get_util(start_commit)->pathname = filename;
+ if (fill_util_info(start_commit)) {
+ printf("%s not found in %s\n", filename, commit);
+ return 1;
}
- num_args = 0;
- args[num_args++] = NULL;
- args[num_args++] = "--topo-order";
- args[num_args++] = "--remove-empty";
- args[num_args++] = commit;
- args[num_args++] = "--";
- args[num_args++] = filename;
- args[num_args] = NULL;
- setup_revisions(num_args, args, &rev, "HEAD");
+ init_revisions(&rev);
+ rev.remove_empty_trees = 1;
+ rev.topo_order = 1;
+ rev.prune_fn = simplify_commit;
+ rev.topo_setter = topo_setter;
+ rev.topo_getter = topo_getter;
+ rev.limited = 1;
+
+ commit_list_insert(start_commit, &rev.commits);
+
+ args[0] = filename;
+ args[1] = NULL;
+ diff_tree_setup_paths(args);
prepare_revision_walk(&rev);
process_commits(&rev, filename, &initial);
for (max_digits = 1, i = 10; i <= num_blame_lines + 1; max_digits++)
i *= 10;
+ longest_file = 0;
+ longest_author = 0;
+ found_rename = 0;
for (i = 0; i < num_blame_lines; i++) {
struct commit *c = blame_lines[i];
+ struct util_info* u;
if (!c)
c = initial;
+ u = c->object.util;
+ if (!found_rename && strcmp(filename, u->pathname))
+ found_rename = 1;
+ if (longest_file < strlen(u->pathname))
+ longest_file = strlen(u->pathname);
+ get_commit_info(c, &ci);
+ if (longest_author < strlen(ci.author))
+ longest_author = strlen(ci.author);
+ }
+
+ for (i = 0; i < num_blame_lines; i++) {
+ struct commit *c = blame_lines[i];
+ struct util_info* u;
+
+ if (!c)
+ c = initial;
+
+ u = c->object.util;
get_commit_info(c, &ci);
fwrite(sha1_to_hex(c->object.sha1), sha1_len, 1, stdout);
- if(compability)
+ if(compability) {
printf("\t(%10s\t%10s\t%d)", ci.author,
format_time(ci.author_time, ci.author_tz), i+1);
- else
- printf(" (%-15.15s %10s %*d) ", ci.author,
+ } else {
+ if (found_rename)
+ printf(" %-*.*s", longest_file, longest_file,
+ u->pathname);
+ printf(" (%-*.*s %10s %*d) ",
+ longest_author, longest_author, ci.author,
format_time(ci.author_time, ci.author_tz),
max_digits, i+1);
+ }
if(i == num_blame_lines - 1) {
fwrite(buf, blame_len - (buf - blame_contents),
extern int trust_executable_bit;
extern int assume_unchanged;
extern int only_use_symrefs;
+extern int warn_ambiguous_refs;
extern int diff_rename_limit_default;
extern int shared_repository;
extern const char *apply_default_whitespace;
int opt;
setup_git_directory();
+ git_config(git_default_config);
if (argc != 3 || get_sha1(argv[2], sha1))
usage("git-cat-file [-t|-s|-e|-p|<type>] <sha1>");
while (fgets(comment, sizeof(comment), stdin) != NULL)
add_buffer(&buffer, &size, "%s", comment);
- write_sha1_file(buffer, size, "commit", commit_sha1);
- printf("%s\n", sha1_to_hex(commit_sha1));
- return 0;
+ if (!write_sha1_file(buffer, size, "commit", commit_sha1)) {
+ printf("%s\n", sha1_to_hex(commit_sha1));
+ return 0;
+ }
+ else
+ return 1;
}
return count;
}
+void topo_sort_default_setter(struct commit *c, void *data)
+{
+ c->object.util = data;
+}
+
+void *topo_sort_default_getter(struct commit *c)
+{
+ return c->object.util;
+}
+
/*
* Performs an in-place topological sort on the list supplied.
*/
void sort_in_topological_order(struct commit_list ** list, int lifo)
{
+ sort_in_topological_order_fn(list, lifo, topo_sort_default_setter,
+ topo_sort_default_getter);
+}
+
+void sort_in_topological_order_fn(struct commit_list ** list, int lifo,
+ topo_sort_set_fn_t setter,
+ topo_sort_get_fn_t getter)
+{
struct commit_list * next = *list;
struct commit_list * work = NULL, **insert;
struct commit_list ** pptr = list;
next=*list;
while (next) {
next_nodes->list_item = next;
- next->item->object.util = next_nodes;
+ setter(next->item, next_nodes);
next_nodes++;
next = next->next;
}
struct commit_list * parents = next->item->parents;
while (parents) {
struct commit * parent=parents->item;
- struct sort_node * pn = (struct sort_node *)parent->object.util;
-
+ struct sort_node * pn = (struct sort_node *) getter(parent);
+
if (pn)
pn->indegree++;
parents=parents->next;
next=*list;
insert = &work;
while (next) {
- struct sort_node * node = (struct sort_node *)next->item->object.util;
+ struct sort_node * node = (struct sort_node *) getter(next->item);
if (node->indegree == 0) {
insert = &commit_list_insert(next->item, insert)->next;
sort_by_date(&work);
while (work) {
struct commit * work_item = pop_commit(&work);
- struct sort_node * work_node = (struct sort_node *)work_item->object.util;
+ struct sort_node * work_node = (struct sort_node *) getter(work_item);
struct commit_list * parents = work_item->parents;
while (parents) {
struct commit * parent=parents->item;
- struct sort_node * pn = (struct sort_node *)parent->object.util;
-
+ struct sort_node * pn = (struct sort_node *) getter(parent);
+
if (pn) {
- /*
+ /*
* parents are only enqueued for emission
* when all their children have been emitted thereby
* guaranteeing topological order.
*pptr = work_node->list_item;
pptr = &(*pptr)->next;
*pptr = NULL;
- work_item->object.util = NULL;
+ setter(work_item, NULL);
}
free(nodes);
}
/*
* Performs an in-place topological sort of list supplied.
*
- * Pre-conditions:
+ * Pre-conditions for sort_in_topological_order:
* all commits in input list and all parents of those
* commits must have object.util == NULL
- *
- * Post-conditions:
+ *
+ * Pre-conditions for sort_in_topological_order_fn:
+ * all commits in input list and all parents of those
+ * commits must have getter(commit) == NULL
+ *
+ * Post-conditions:
* invariant of resulting list is:
* a reachable from b => ord(b) < ord(a)
* in addition, when lifo == 0, commits on parallel tracks are
* sorted in the dates order.
*/
+
+typedef void (*topo_sort_set_fn_t)(struct commit*, void *data);
+typedef void* (*topo_sort_get_fn_t)(struct commit*);
+
+void topo_sort_default_setter(struct commit *c, void *data);
+void *topo_sort_default_getter(struct commit *c);
+
void sort_in_topological_order(struct commit_list ** list, int lifo);
+void sort_in_topological_order_fn(struct commit_list ** list, int lifo,
+ topo_sort_set_fn_t setter,
+ topo_sort_get_fn_t getter);
#endif /* COMMIT_H */
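The setter/getter indirection introduced by `sort_in_topological_order_fn` lets callers choose where per-commit scratch data lives instead of hard-wiring `object.util`. The same idea can be sketched self-contained (children emitted before their parents, matching the stated invariant); names here are illustrative, not git's:

```python
def topo_sort_fn(commits, parents_of, setter, getter):
    """Order commits so every commit precedes all of its parents.

    Scratch indegrees live behind the caller-supplied setter/getter,
    mirroring the topo_sort_set_fn_t/topo_sort_get_fn_t callbacks.
    getter() must return None for commits outside `commits`.
    """
    for c in commits:
        setter(c, 0)
    # indegree of a commit = number of its children in the list
    for c in commits:
        for p in parents_of(c):
            if getter(p) is not None:
                setter(p, getter(p) + 1)
    work = [c for c in commits if getter(c) == 0]
    out = []
    while work:
        c = work.pop()
        out.append(c)
        for p in parents_of(c):
            d = getter(p)
            if d is not None:
                setter(p, d - 1)
                if d == 1:          # last child emitted; parent is ready
                    work.append(p)
        setter(c, None)             # like clearing object.util on emission
    return out
```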
return 0;
}
+ if (!strcmp(var, "core.warnambiguousrefs")) {
+ warn_ambiguous_refs = git_config_bool(var, value);
+ return 0;
+ }
+
if (!strcmp(var, "user.name")) {
strncpy(git_default_name, value, sizeof(git_default_name));
return 0;
(defcustom git-committer-name nil
"User name to use for commits.
-The default is to fall back to `add-log-full-name' and then `user-full-name'."
+The default is to fall back to the repository config, then to `add-log-full-name' and then to `user-full-name'."
:group 'git
:type '(choice (const :tag "Default" nil)
(string :tag "Name")))
(defcustom git-committer-email nil
"Email address to use for commits.
-The default is to fall back to `add-log-mailing-address' and then `user-mail-address'."
+The default is to fall back to the git repository config, then to `add-log-mailing-address' and then to `user-mail-address'."
:group 'git
:type '(choice (const :tag "Default" nil)
(string :tag "Email")))
(append (git-get-env-strings env) (list "git") args))
(apply #'call-process "git" nil buffer nil args)))
+(defun git-call-process-env-string (env &rest args)
+ "Wrapper for call-process that sets environment strings, and returns the process output as a string."
+ (with-temp-buffer
+ (and (eq 0 (apply #' git-call-process-env t env args))
+ (buffer-string))))
+
(defun git-run-process-region (buffer start end program args)
"Run a git process with a buffer region as input."
(let ((output-buffer (current-buffer))
(defun git-get-string-sha1 (string)
"Read a SHA1 from the specified string."
- (let ((pos (string-match "[0-9a-f]\\{40\\}" string)))
- (and pos (substring string pos (match-end 0)))))
+ (and string
+ (string-match "[0-9a-f]\\{40\\}" string)
+ (match-string 0 string)))
(defun git-get-committer-name ()
"Return the name to use as GIT_COMMITTER_NAME."
; copied from log-edit
(or git-committer-name
+ (git-repo-config "user.name")
(and (boundp 'add-log-full-name) add-log-full-name)
(and (fboundp 'user-full-name) (user-full-name))
(and (boundp 'user-full-name) user-full-name)))
"Return the email address to use as GIT_COMMITTER_EMAIL."
; copied from log-edit
(or git-committer-email
+ (git-repo-config "user.email")
(and (boundp 'add-log-mailing-address) add-log-mailing-address)
(and (fboundp 'user-mail-address) (user-mail-address))
(and (boundp 'user-mail-address) user-mail-address)))
(defun git-rev-parse (rev)
"Parse a revision name and return its SHA1."
(git-get-string-sha1
- (with-output-to-string
- (with-current-buffer standard-output
- (git-call-process-env t nil "rev-parse" rev)))))
+ (git-call-process-env-string nil "rev-parse" rev)))
+
+(defun git-repo-config (key)
+  "Retrieve the value associated with KEY in the git repository config file."
+ (let ((str (git-call-process-env-string nil "repo-config" key)))
+ (and str (car (split-string str "\n")))))
(defun git-symbolic-ref (ref)
"Wrapper for the git-symbolic-ref command."
- (car
- (split-string
- (with-output-to-string
- (with-current-buffer standard-output
- (git-call-process-env t nil "symbolic-ref" ref)))
- "\n")))
+ (let ((str (git-call-process-env-string nil "symbolic-ref" ref)))
+ (and str (car (split-string str "\n")))))
(defun git-update-ref (ref val &optional oldval)
"Update a reference by calling git-update-ref."
(defun git-write-tree (&optional index-file)
"Call git-write-tree and return the resulting tree SHA1 as a string."
(git-get-string-sha1
- (with-output-to-string
- (with-current-buffer standard-output
- (git-call-process-env t
- (if index-file `(("GIT_INDEX_FILE" . ,index-file)) nil)
- "write-tree")))))
+ (git-call-process-env-string (and index-file `(("GIT_INDEX_FILE" . ,index-file))) "write-tree")))
(defun git-commit-tree (buffer tree head)
"Call git-commit-tree with buffer as input and return the resulting commit SHA1."
(git-setup-diff-buffer
(apply #'git-run-command-buffer "*git-diff*" "diff-index" "-p" "-M" "HEAD" "--" (git-get-filenames files)))))
+(defun git-diff-file-merge-head (arg)
+ "Diff the marked file(s) against the first merge head (or the nth one with a numeric prefix)."
+ (interactive "p")
+ (let ((files (git-marked-files))
+ (merge-heads (git-get-merge-heads)))
+ (unless merge-heads (error "No merge in progress"))
+ (git-setup-diff-buffer
+ (apply #'git-run-command-buffer "*git-diff*" "diff-index" "-p" "-M"
+ (or (nth (1- arg) merge-heads) "HEAD") "--" (git-get-filenames files)))))
+
(defun git-diff-unmerged-file (stage)
"Diff the marked unmerged file(s) against the specified stage."
(let ((files (git-marked-files)))
(define-key diff-map "=" 'git-diff-file)
(define-key diff-map "e" 'git-diff-file-idiff)
(define-key diff-map "E" 'git-find-file-imerge)
+ (define-key diff-map "h" 'git-diff-file-merge-head)
(define-key diff-map "m" 'git-diff-file-mine)
(define-key diff-map "o" 'git-diff-file-other)
(setq git-status-mode-map map)))
$GIT_SVN_INDEX $GIT_SVN
$GIT_DIR $REV_DIR/;
$AUTHOR = 'Eric Wong <normalperson@yhbt.net>';
-$VERSION = '0.10.0';
+$VERSION = '0.11.0';
$GIT_DIR = $ENV{GIT_DIR} || "$ENV{PWD}/.git";
# make sure the svn binary gives consistent output between locales and TZs:
$ENV{TZ} = 'UTC';
push @log_args, '--stop-on-copy' unless $_no_stop_copy;
my $svn_log = svn_log_raw(@log_args);
- @$svn_log = sort { $a->{revision} <=> $b->{revision} } @$svn_log;
- my $base = shift @$svn_log or croak "No base revision!\n";
+ my $base = next_log_entry($svn_log) or croak "No base revision!\n";
my $last_commit = undef;
unless (-d $SVN_WC) {
svn_cmd_checkout($SVN_URL,$base->{revision},$SVN_WC);
}
my @svn_up = qw(svn up);
push @svn_up, '--ignore-externals' unless $_no_ignore_ext;
- my $last_rev = $base->{revision};
- foreach my $log_msg (@$svn_log) {
- assert_svn_wc_clean($last_rev, $last_commit);
- $last_rev = $log_msg->{revision};
- sys(@svn_up,"-r$last_rev");
+ my $last = $base;
+ while (my $log_msg = next_log_entry($svn_log)) {
+ assert_svn_wc_clean($last->{revision}, $last_commit);
+ if ($last->{revision} >= $log_msg->{revision}) {
+ croak "Out of order: last >= current: ",
+ "$last->{revision} >= $log_msg->{revision}\n";
+ }
+ sys(@svn_up,"-r$log_msg->{revision}");
$last_commit = git_commit($log_msg, $last_commit, @parents);
+ $last = $log_msg;
}
- assert_svn_wc_clean($last_rev, $last_commit);
+ assert_svn_wc_clean($last->{revision}, $last_commit);
unless (-e "$GIT_DIR/refs/heads/master") {
sys(qw(git-update-ref refs/heads/master),$last_commit);
}
- return pop @$svn_log;
+ return $last;
}
sub commit {
return fetch("$rev_committed=$commit")->{revision};
}
+# read the entire log into a temporary file (which is removed ASAP)
+# and store the file handle + parser state
sub svn_log_raw {
my (@log_args) = @_;
- my $pid = open my $log_fh,'-|';
+ my $log_fh = IO::File->new_tmpfile or croak $!;
+ my $pid = fork;
defined $pid or croak $!;
-
- if ($pid == 0) {
+ if (!$pid) {
+ open STDOUT, '>&', $log_fh or croak $!;
exec (qw(svn log), @log_args) or croak $!
}
+ waitpid $pid, 0;
+	croak $? if $?;
+ seek $log_fh, 0, 0 or croak $!;
+ return { state => 'sep', fh => $log_fh };
+}
- my @svn_log;
- my $state = 'sep';
- while (<$log_fh>) {
+sub next_log_entry {
+ my $log = shift; # retval of svn_log_raw()
+ my $ret = undef;
+ my $fh = $log->{fh};
+
+ while (<$fh>) {
chomp;
if (/^\-{72}$/) {
- if ($state eq 'msg') {
- if ($svn_log[$#svn_log]->{lines}) {
- $svn_log[$#svn_log]->{msg} .= $_."\n";
- unless(--$svn_log[$#svn_log]->{lines}) {
- $state = 'sep';
+ if ($log->{state} eq 'msg') {
+ if ($ret->{lines}) {
+ $ret->{msg} .= $_."\n";
+ unless(--$ret->{lines}) {
+ $log->{state} = 'sep';
}
} else {
croak "Log parse error at: $_\n",
- $svn_log[$#svn_log]->{revision},
+ $ret->{revision},
"\n";
}
next;
}
- if ($state ne 'sep') {
+ if ($log->{state} ne 'sep') {
croak "Log parse error at: $_\n",
- "state: $state\n",
- $svn_log[$#svn_log]->{revision},
+ "state: $log->{state}\n",
+ $ret->{revision},
"\n";
}
- $state = 'rev';
+ $log->{state} = 'rev';
# if we have an empty log message, put something there:
- if (@svn_log) {
- $svn_log[$#svn_log]->{msg} ||= "\n";
- delete $svn_log[$#svn_log]->{lines};
+ if ($ret) {
+ $ret->{msg} ||= "\n";
+ delete $ret->{lines};
+ return $ret;
}
next;
}
- if ($state eq 'rev' && s/^r(\d+)\s*\|\s*//) {
+ if ($log->{state} eq 'rev' && s/^r(\d+)\s*\|\s*//) {
my $rev = $1;
my ($author, $date, $lines) = split(/\s*\|\s*/, $_, 3);
($lines) = ($lines =~ /(\d+)/);
/(\d{4})\-(\d\d)\-(\d\d)\s
(\d\d)\:(\d\d)\:(\d\d)\s([\-\+]\d+)/x)
or croak "Failed to parse date: $date\n";
- my %log_msg = ( revision => $rev,
+ $ret = { revision => $rev,
date => "$tz $Y-$m-$d $H:$M:$S",
author => $author,
lines => $lines,
- msg => '' );
+ msg => '' };
if (defined $_authors && ! defined $users{$author}) {
die "Author: $author not defined in ",
"$_authors file\n";
}
- push @svn_log, \%log_msg;
- $state = 'msg_start';
+ $log->{state} = 'msg_start';
next;
}
# skip the first blank line of the message:
- if ($state eq 'msg_start' && /^$/) {
- $state = 'msg';
- } elsif ($state eq 'msg') {
- if ($svn_log[$#svn_log]->{lines}) {
- $svn_log[$#svn_log]->{msg} .= $_."\n";
- unless (--$svn_log[$#svn_log]->{lines}) {
- $state = 'sep';
+ if ($log->{state} eq 'msg_start' && /^$/) {
+ $log->{state} = 'msg';
+ } elsif ($log->{state} eq 'msg') {
+ if ($ret->{lines}) {
+ $ret->{msg} .= $_."\n";
+ unless (--$ret->{lines}) {
+ $log->{state} = 'sep';
}
} else {
croak "Log parse error at: $_\n",
- $svn_log[$#svn_log]->{revision},"\n";
+ $ret->{revision},"\n";
}
}
}
- close $log_fh or croak $?;
- return \@svn_log;
+ return $ret;
}
sub svn_info {
}
}
+sub trees_eq {
+ my ($x, $y) = @_;
+ my @x = safe_qx('git-cat-file','commit',$x);
+ my @y = safe_qx('git-cat-file','commit',$y);
+ if (($y[0] ne $x[0]) || $x[0] !~ /^tree $sha1\n$/
+ || $y[0] !~ /^tree $sha1\n$/) {
+ print STDERR "Trees not equal: $y[0] != $x[0]\n";
+		return 0;
+ }
+ return 1;
+}
+
sub assert_revision_eq_or_unknown {
my ($revno, $commit) = @_;
if (-f "$REV_DIR/$revno") {
my $current = file_to_s("$REV_DIR/$revno");
- if ($commit ne $current) {
+ if (($commit ne $current) && !trees_eq($commit, $current)) {
croak "$REV_DIR/$revno already exists!\n",
"current: $current\nexpected: $commit\n";
}
Data structures:
-@svn_log = array of log_msg hashes
+$svn_log hashref (as returned by svn_log_raw)
+{
+ fh => file handle of the log file,
+ state => state of the log file parser (sep/msg/rev/msg_start...)
+}
-$log_msg hash
+$log_msg hashref as returned by next_log_entry($svn_log)
{
msg => 'whitespace-formatted log entry
', # trailing newline is preserved
unsigned long *delta_size,
unsigned long max_size)
{
- unsigned int i, outpos, outsize, inscnt, hash_shift;
+ unsigned int i, outpos, outsize, hash_shift;
+ int inscnt;
const unsigned char *ref_data, *ref_top, *data, *top;
unsigned char *out;
struct index *entry, **hash;
unsigned char *op;
if (inscnt) {
+ while (moff && ref_data[moff-1] == data[-1]) {
+ if (msize == 0x10000)
+ break;
+ /* we can match one byte back */
+ msize++;
+ moff--;
+ data--;
+ outpos--;
+ if (--inscnt)
+ continue;
+ outpos--; /* remove count slot */
+ inscnt--; /* make it -1 */
+ break;
+ }
out[outpos - inscnt - 1] = inscnt;
inscnt = 0;
}
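The loop added above grows a copy match backwards into the literal bytes that were buffered but not yet flushed as an insert op. A simplified Python sketch of that idea follows; the C version additionally shrinks or removes the pending insert op's count slot in the output buffer (the `inscnt`/`outpos` bookkeeping), which this sketch omits, and all names here are illustrative rather than git's identifiers:

```python
def extend_match_back(ref, moff, msize, data, dpos, pending):
    """Before flushing `pending` buffered literal bytes as an insert
    op, grow the copy match (moff, msize) leftwards while the byte
    just before the match in the reference equals the byte just before
    it in the input, up to the 0x10000 copy-size limit."""
    while pending and moff and msize < 0x10000 and ref[moff - 1] == data[dpos - 1]:
        moff -= 1      # match starts one byte earlier in the reference
        dpos -= 1      # ...and one byte earlier in the input
        msize += 1
        pending -= 1   # one buffered literal fewer to emit
    return moff, msize, dpos, pending
```

Each byte reclaimed this way turns an insert byte into part of a copy op, shrinking the delta.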
int trust_executable_bit = 1;
int assume_unchanged = 0;
int only_use_symrefs = 0;
+int warn_ambiguous_refs = 1;
int repository_format_version = 0;
char git_commit_encoding[MAX_ENCODING_LENGTH] = "utf-8";
int shared_repository = 0;
static int show_root = 0;
static int show_tags = 0;
static int show_unreachable = 0;
-static int standalone = 0;
static int check_full = 0;
static int check_strict = 0;
-static int keep_cache_objects = 0;
+static int keep_cache_objects = 0;
static unsigned char head_sha1[20];
#ifdef NO_D_INO_IN_DIRENT
continue;
if (!obj->parsed) {
- if (!standalone && has_sha1_file(obj->sha1))
+ if (has_sha1_file(obj->sha1))
; /* it is in pack */
else
printf("missing %s %s\n",
for (j = 0; j < refs->count; j++) {
struct object *ref = refs->ref[j];
if (ref->parsed ||
- (!standalone && has_sha1_file(ref->sha1)))
+ (has_sha1_file(ref->sha1)))
continue;
printf("broken link from %7s %s\n",
obj->type, sha1_to_hex(obj->sha1));
obj = lookup_object(sha1);
if (!obj) {
- if (!standalone && has_sha1_file(sha1)) {
+ if (has_sha1_file(sha1)) {
default_refs++;
return 0; /* it is in a pack */
}
keep_cache_objects = 1;
continue;
}
- if (!strcmp(arg, "--standalone")) {
- standalone = 1;
- continue;
- }
if (!strcmp(arg, "--full")) {
check_full = 1;
continue;
continue;
}
if (*arg == '-')
- usage("git-fsck-objects [--tags] [--root] [[--unreachable] [--cache] [--standalone | --full] [--strict] <head-sha1>*]");
+ usage("git-fsck-objects [--tags] [--root] [[--unreachable] [--cache] [--full] [--strict] <head-sha1>*]");
}
- if (standalone && check_full)
- die("Only one of --standalone or --full can be used.");
- if (standalone)
- putenv("GIT_ALTERNATE_OBJECT_DIRECTORIES=");
-
fsck_head_link();
fsck_object_dir(get_object_directory());
if (check_full) {
EOF
while read cmd
do
- sed -n "/NAME/,/git-$cmd/H;
- \$ {x; s/.*git-$cmd - \\(.*\\)/ {\"$cmd\", \"\1\"},/; p}" \
- "Documentation/git-$cmd.txt"
+ sed -n '
+ /NAME/,/git-'"$cmd"'/H
+ ${
+ x
+ s/.*git-'"$cmd"' - \(.*\)/ {"'"$cmd"'", "\1"},/
+ p
+ }' "Documentation/git-$cmd.txt"
done
echo "};"
-r, --rename
Follow renames (Defaults on).
-S, --rev-file revs-file
- use revs from revs-file instead of calling git-rev-list
+ Use revs from revs-file instead of calling git-rev-list
-h, --help
This message.
';
}
'
- if test -n "$verbose"
+ if test -n "$verbose" -a -z "$IS_INITIAL"
then
git-diff-index --cached -M -p --diff-filter=MDTCRA $REFERENCE
fi
my $last_branch = "";
my $orig_branch = "";
my %branch_date;
+my $tip_at_start = undef;
my $git_dir = $ENV{"GIT_DIR"} || ".git";
$git_dir = getwd()."/".$git_dir unless $git_dir =~ m#^/#;
$last_branch = "master";
}
$orig_branch = $last_branch;
+ $tip_at_start = `git-rev-parse --verify HEAD`;
# populate index
system('git-read-tree', $last_branch);
# Now switch back to the branch we were in before all of this happened
if($orig_branch) {
- print "DONE; you may need to merge manually.\n" if $opt_v;
+ print "DONE.\n" if $opt_v;
+ if ($opt_i) {
+ exit 0;
+ }
+ my $tip_at_end = `git-rev-parse --verify HEAD`;
+ if ($tip_at_start ne $tip_at_end) {
+ for ($tip_at_start, $tip_at_end) { chomp; }
+ print "Fetched into the current branch.\n" if $opt_v;
+ system(qw(git-read-tree -u -m),
+ $tip_at_start, $tip_at_end);
+ die "Fast-forward update failed: $?\n" if $?;
+ }
+ else {
+ system(qw(git-merge cvsimport HEAD), "refs/heads/$opt_o");
+ die "Could not merge $opt_o into the current branch.\n" if $?;
+ }
} else {
$orig_branch = "master";
print "DONE; creating $orig_branch branch\n" if $opt_v;
flags="$flags'$cc_or_p' " ;;
esac
-# If we do not have -B nor -C, default to -M.
+# If we do not have -B, -C, -r, nor -p, default to -M.
case " $flags " in
-*" '-"[BCM]* | *" '--find-copies-harder' "*)
+*" '-"[BCMrp]* | *" '--find-copies-harder' "*)
;; # something like -M50.
*)
flags="$flags'-M' " ;;
;;
*)
echo >&2 " not updating."
+ exit 1
;;
esac
}
'
all_strategies='recursive octopus resolve stupid ours'
-default_strategies='recursive'
+default_twohead_strategies='recursive'
+default_octopus_strategies='octopus'
+no_trivial_merge_strategies='ours'
use_strategies=
+
+index_merge=t
if test "@@NO_PYTHON@@"; then
all_strategies='resolve octopus stupid ours'
- default_strategies='resolve'
+ default_twohead_strategies='resolve'
fi
dropsave() {
shift
done
-test "$#" -le 2 && usage ;# we need at least two heads.
-
merge_msg="$1"
shift
head_arg="$1"
shift
# All the rest are remote heads
+test "$#" = 0 && usage ;# we need at least one remote head.
+
remoteheads=
for remote
do
done
set x $remoteheads ; shift
+case "$use_strategies" in
+'')
+ case "$#" in
+ 1)
+ use_strategies="$default_twohead_strategies" ;;
+ *)
+ use_strategies="$default_octopus_strategies" ;;
+ esac
+ ;;
+esac
+
+for s in $use_strategies
+do
+ case " $s " in
+ *" $no_trivial_merge_strategies "*)
+ index_merge=f
+ break
+ ;;
+ esac
+done
+
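The block above defers strategy defaulting until the remote heads are counted, and then disables the trivial in-index merge whenever any selected strategy must not short-circuit. A hedged Python sketch of that selection logic (function and argument names are mine, not git-merge's):

```python
def pick_strategies(remote_heads, requested=None):
    """One remote head defaults to the two-head strategy, several
    default to octopus; the fast trivial index merge is skipped when
    any chosen strategy (here only 'ours') needs the real merge to run."""
    default_twohead, default_octopus = ["recursive"], ["octopus"]
    no_trivial = {"ours"}
    strategies = requested or (default_twohead if len(remote_heads) == 1
                               else default_octopus)
    index_merge = not any(s in no_trivial for s in strategies)
    return strategies, index_merge
```

This mirrors why `-s ours` must reach the real-merge code path: a trivial index merge would produce a tree `ours` is specifically meant to ignore.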
case "$#" in
1)
common=$(git-merge-base --all $head "$@")
esac
echo "$head" >"$GIT_DIR/ORIG_HEAD"
-case "$#,$common,$no_commit" in
-*,'',*)
+case "$index_merge,$#,$common,$no_commit" in
+f,*)
+ # We've been told not to try anything clever. Skip to real merge.
+ ;;
+?,*,'',*)
# No common ancestors found. We need a real merge.
;;
-1,"$1",*)
+?,1,"$1",*)
# If head can reach all the merge then we are up to date.
- # but first the most common case of merging one remote
+ # but first the most common case of merging one remote.
echo "Already up-to-date."
dropsave
exit 0
;;
-1,"$head",*)
+?,1,"$head",*)
# Again the most common case of merging one remote.
echo "Updating from $head to $1"
git-update-index --refresh 2>/dev/null
dropsave
exit 0
;;
-1,?*"$LF"?*,*)
+?,1,?*"$LF"?*,*)
# We are not doing octopus and not fast forward. Need a
# real merge.
;;
-1,*,)
+?,1,*,)
# We are not doing octopus, not fast forward, and have only
# one common. See if it is really trivial.
git var GIT_COMMITTER_IDENT >/dev/null || exit
# We are going to make a new commit.
git var GIT_COMMITTER_IDENT >/dev/null || exit
-case "$use_strategies" in
-'')
- case "$#" in
- 1)
- use_strategies="$default_strategies" ;;
- *)
- use_strategies=octopus ;;
- esac
- ;;
-esac
-
# At this point, we need a real merge. No matter what strategy
# we use, it would operate on the index, possibly affecting the
# working tree, and when resolved cleanly, have the desired tree
# auto resolved the merge cleanly.
if test '' != "$result_tree"
then
- parents="-p $head"
- for remote
- do
- parents="$parents -p $remote"
- done
+ parents=$(git-show-branch --independent "$head" "$@" | sed -e 's/^/-p /')
result_commit=$(echo "$merge_msg" | git-commit-tree $result_tree $parents) || exit
finish "$result_commit" "Merge $result_commit, made by $wt_strategy."
dropsave
# First update the working tree to match $curr_head.
echo >&2 "Warning: fetch updated the current branch head."
- echo >&2 "Warning: fast forwarding your working tree."
+ echo >&2 "Warning: fast forwarding your working tree from"
+ echo >&2 "Warning: $orig_head commit."
+ git-update-index --refresh 2>/dev/null
git-read-tree -u -m "$orig_head" "$curr_head" ||
- die "You need to first update your working tree."
+ die 'Cannot fast-forward your working tree.
+After making sure that you saved anything precious from
+$ git diff '$orig_head'
+output, run
+$ git reset --hard
+to recover.'
+
fi
merge_head=$(sed -e '/ not-for-merge /d' \
exit 0
;;
?*' '?*)
- var=`git repo-config --get pull.octopus`
- if test '' = "$var"
+ var=`git-repo-config --get pull.octopus`
+ if test -n "$var"
then
- strategy_default_args='-s octopus'
- else
strategy_default_args="-s $var"
fi
;;
*)
- var=`git repo-config --get pull.twohead`
- if test '' = "$var"
- then
- strategy_default_args='-s recursive'
- else
+ var=`git-repo-config --get pull.twohead`
+ if test -n "$var"
+ then
strategy_default_args="-s $var"
fi
;;
foreach my $t (@files) {
open(F,"<",$t) or die "can't open file $t";
+ my $author_not_sender = undef;
@cc = @initial_cc;
my $found_mbox = 0;
my $header_done = 0;
$subject = $1;
} elsif (/^(Cc|From):\s+(.*)$/) {
- next if ($2 eq $from && $suppress_from);
+ if ($2 eq $from) {
+ next if ($suppress_from);
+ }
+ else {
+ $author_not_sender = $2;
+ }
printf("(mbox) Adding cc: %s from line '%s'\n",
$2, $_) unless $quiet;
push @cc, $2;
}
}
close F;
+ if (defined $author_not_sender) {
+ $message = "From: $author_not_sender\n\n$message";
+ }
$cc = join(", ", unique_email_list(@cc));
proc start_rev_list {rlargs} {
global startmsecs nextupdate ncmupdate
- global commfd leftover tclencoding
+ global commfd leftover tclencoding datemode
set startmsecs [clock clicks -milliseconds]
set nextupdate [expr {$startmsecs + 100}]
set ncmupdate 1
+ initlayout
+ set order "--topo-order"
+ if {$datemode} {
+ set order "--date-order"
+ }
if {[catch {
- set commfd [open [concat | git-rev-list --header --topo-order \
+ set commfd [open [concat | git-rev-list --header $order \
--parents $rlargs] r]
} err]} {
puts stderr "Error executing git-rev-list: $err"
}
proc getcommits {rargs} {
- global oldcommits commits phase canv mainfont env
+ global phase canv mainfont
- # check that we can find a .git directory somewhere...
- set gitdir [gitdir]
- if {![file isdirectory $gitdir]} {
- error_popup "Cannot find the git directory \"$gitdir\"."
- exit 1
- }
- set oldcommits {}
- set commits {}
set phase getcommits
start_rev_list [parse_args $rargs]
$canv delete all
}
proc getcommitlines {commfd} {
- global oldcommits commits parents cdate children nchildren
- global commitlisted phase nextupdate
- global stopped redisplaying leftover
- global canv
+ global commitlisted nextupdate
+ global leftover
+ global displayorder commitidx commitrow commitdata
set stuff [read $commfd]
if {$stuff == {}} {
exit 1
}
set start 0
+ set gotsome 0
while 1 {
set i [string first "\0" $stuff $start]
if {$i < 0} {
append leftover [string range $stuff $start end]
- return
+ break
}
- set cmit [string range $stuff $start [expr {$i - 1}]]
if {$start == 0} {
- set cmit "$leftover$cmit"
+ set cmit $leftover
+ append cmit [string range $stuff 0 [expr {$i - 1}]]
set leftover {}
+ } else {
+ set cmit [string range $stuff $start [expr {$i - 1}]]
}
set start [expr {$i + 1}]
set j [string first "\n" $cmit]
set ids [string range $cmit 0 [expr {$j - 1}]]
set ok 1
foreach id $ids {
- if {![regexp {^[0-9a-f]{40}$} $id]} {
+ if {[string length $id] != 40} {
set ok 0
break
}
}
set id [lindex $ids 0]
set olds [lrange $ids 1 end]
- set cmit [string range $cmit [expr {$j + 1}] end]
- lappend commits $id
set commitlisted($id) 1
- parsecommit $id $cmit 1 [lrange $ids 1 end]
- drawcommit $id 1
- if {[clock clicks -milliseconds] >= $nextupdate} {
- doupdate 1
- }
- while {$redisplaying} {
- set redisplaying 0
- if {$stopped == 1} {
- set stopped 0
- set phase "getcommits"
- foreach id $commits {
- drawcommit $id 1
- if {$stopped} break
- if {[clock clicks -milliseconds] >= $nextupdate} {
- doupdate 1
- }
- }
- }
- }
+ updatechildren $id $olds
+ set commitdata($id) [string range $cmit [expr {$j + 1}] end]
+ set commitrow($id) $commitidx
+ incr commitidx
+ lappend displayorder $id
+ set gotsome 1
+ }
+ if {$gotsome} {
+ layoutmore
+ }
+ if {[clock clicks -milliseconds] >= $nextupdate} {
+ doupdate 1
}
}
proc readcommit {id} {
if {[catch {set contents [exec git-cat-file commit $id]}]} return
- parsecommit $id $contents 0 {}
+ updatechildren $id {}
+ parsecommit $id $contents 0
}
proc updatecommits {rargs} {
- global commitlisted commfd phase
- global startmsecs nextupdate ncmupdate
- global idtags idheads idotherrefs
- global leftover
- global parsed_args
- global canv mainfont
- global oldcommits commits
- global parents nchildren children ncleft
-
- set old_args $parsed_args
- parse_args $rargs
-
- if {$phase == "getcommits" || $phase == "incrdraw"} {
- # havent read all the old commits, just start again from scratch
- stopfindproc
- set oldcommits {}
- set commits {}
- foreach v {children nchildren parents commitlisted commitinfo
- selectedline matchinglines treediffs
- mergefilelist currentid rowtextx} {
- global $v
- catch {unset $v}
- }
- readrefs
- if {$phase == "incrdraw"} {
- allcanvs delete all
- $canv create text 3 3 -anchor nw -text "Reading commits..." \
- -font $mainfont -tags textitems
- set phase getcommits
- }
- start_rev_list $parsed_args
- return
- }
-
- foreach id $old_args {
- if {![regexp {^[0-9a-f]{40}$} $id]} continue
- if {[info exists oldref($id)]} continue
- set oldref($id) $id
- lappend ignoreold "^$id"
- }
- foreach id $parsed_args {
- if {![regexp {^[0-9a-f]{40}$} $id]} continue
- if {[info exists ref($id)]} continue
- set ref($id) $id
- lappend ignorenew "^$id"
- }
-
- foreach a $old_args {
- if {![info exists ref($a)]} {
- lappend ignorenew $a
- }
- }
-
- set phase updatecommits
- set oldcommits $commits
- set commits {}
- set removed_commits [split [eval exec git-rev-list $ignorenew] "\n" ]
- if {[llength $removed_commits] > 0} {
- allcanvs delete all
- foreach c $removed_commits {
- set i [lsearch -exact $oldcommits $c]
- if {$i >= 0} {
- set oldcommits [lreplace $oldcommits $i $i]
- unset commitlisted($c)
- foreach p $parents($c) {
- if {[info exists nchildren($p)]} {
- set j [lsearch -exact $children($p) $c]
- if {$j >= 0} {
- set children($p) [lreplace $children($p) $j $j]
- incr nchildren($p) -1
- }
- }
- }
- }
- }
- set phase removecommits
- }
-
- set args {}
- foreach a $parsed_args {
- if {![info exists oldref($a)]} {
- lappend args $a
- }
+ stopfindproc
+ foreach v {children nchildren parents nparents commitlisted
+ colormap selectedline matchinglines treediffs
+ mergefilelist currentid rowtextx commitrow
+ rowidlist rowoffsets idrowranges idrangedrawn iddrawn
+ linesegends crossings cornercrossings} {
+ global $v
+ catch {unset $v}
}
-
+ allcanvs delete all
readrefs
- start_rev_list [concat $ignoreold $args]
+ getcommits $rargs
}
proc updatechildren {id olds} {
- global children nchildren parents nparents ncleft
+ global children nchildren parents nparents
if {![info exists nchildren($id)]} {
set children($id) {}
set nchildren($id) 0
- set ncleft($id) 0
}
set parents($id) $olds
set nparents($id) [llength $olds]
if {![info exists nchildren($p)]} {
set children($p) [list $id]
set nchildren($p) 1
- set ncleft($p) 1
} elseif {[lsearch -exact $children($p) $id] < 0} {
lappend children($p) $id
incr nchildren($p)
- incr ncleft($p)
}
}
}
-proc parsecommit {id contents listed olds} {
+proc parsecommit {id contents listed} {
global commitinfo cdate
set inhdr 1
set audate {}
set comname {}
set comdate {}
- updatechildren $id $olds
set hdrend [string first "\n\n" $contents]
if {$hdrend < 0} {
# should never happen...
$comname $comdate $comment]
}
+proc getcommit {id} {
+ global commitdata commitinfo nparents
+
+ if {[info exists commitdata($id)]} {
+ parsecommit $id $commitdata($id) 1
+ } else {
+ readcommit $id
+ if {![info exists commitinfo($id)]} {
+ set commitinfo($id) {"No commit information available"}
+ set nparents($id) 0
+ }
+ }
+ return 1
+}
+
proc readrefs {} {
global tagids idtags headids idheads tagcontents
global otherrefids idotherrefs
button $w.ok -text OK -command "destroy $w"
pack $w.ok -side bottom -fill x
bind $w <Visibility> "grab $w; focus $w"
+ bind $w <Key-Return> "destroy $w"
tkwait window $w
}
set canv .ctop.top.clist.canv
canvas $canv -height $geometry(canvh) -width $geometry(canv1) \
-bg white -bd 0 \
- -yscrollincr $linespc -yscrollcommand "$cscroll set"
+ -yscrollincr $linespc -yscrollcommand "scrollcanv $cscroll"
.ctop.top.clist add $canv
set canv2 .ctop.top.clist.canv2
canvas $canv2 -height $geometry(canvh) -width $geometry(canv2) \
$rowctxmenu add command -label "Write commit to file" -command writecommit
}
+proc scrollcanv {cscroll f0 f1} {
+ $cscroll set $f0 $f1
+ drawfrac $f0 $f1
+}
+
# when we make a key binding for the toplevel, make sure
# it doesn't get triggered when that key is pressed in the
# find string entry widget.
toplevel $w
wm title $w "About gitk"
message $w.m -text {
-Gitk version 1.2
+Gitk - a commit viewer for git
-Copyright © 2005 Paul Mackerras
+Copyright © 2005-2006 Paul Mackerras
Use and redistribute under the terms of the GNU General Public License} \
-justify center -aspect 400
pack $w.ok -side bottom
}
+proc shortids {ids} {
+ set res {}
+ foreach id $ids {
+ if {[llength $id] > 1} {
+ lappend res [shortids $id]
+ } elseif {[regexp {^[0-9a-f]{40}$} $id]} {
+ lappend res [string range $id 0 7]
+ } else {
+ lappend res $id
+ }
+ }
+ return $res
+}
+
+proc incrange {l x o} {
+ set n [llength $l]
+ while {$x < $n} {
+ set e [lindex $l $x]
+ if {$e ne {}} {
+ lset l $x [expr {$e + $o}]
+ }
+ incr x
+ }
+ return $l
+}
+
+proc ntimes {n o} {
+ set ret {}
+ for {} {$n > 0} {incr n -1} {
+ lappend ret $o
+ }
+ return $ret
+}
+
+proc usedinrange {id l1 l2} {
+ global children commitrow
+
+ if {[info exists commitrow($id)]} {
+ set r $commitrow($id)
+ if {$l1 <= $r && $r <= $l2} {
+ return [expr {$r - $l1 + 1}]
+ }
+ }
+ foreach c $children($id) {
+ if {[info exists commitrow($c)]} {
+ set r $commitrow($c)
+ if {$l1 <= $r && $r <= $l2} {
+ return [expr {$r - $l1 + 1}]
+ }
+ }
+ }
+ return 0
+}
+
+proc sanity {row {full 0}} {
+ global rowidlist rowoffsets
+
+ set col -1
+ set ids [lindex $rowidlist $row]
+ foreach id $ids {
+ incr col
+ if {$id eq {}} continue
+ if {$col < [llength $ids] - 1 &&
+ [lsearch -exact -start [expr {$col+1}] $ids $id] >= 0} {
+ puts "oops: [shortids $id] repeated in row $row col $col: {[shortids [lindex $rowidlist $row]]}"
+ }
+ set o [lindex $rowoffsets $row $col]
+ set y $row
+ set x $col
+ while {$o ne {}} {
+ incr y -1
+ incr x $o
+ if {[lindex $rowidlist $y $x] != $id} {
+ puts "oops: rowoffsets wrong at row [expr {$y+1}] col [expr {$x-$o}]"
+ puts " id=[shortids $id] check started at row $row"
+ for {set i $row} {$i >= $y} {incr i -1} {
+ puts " row $i ids={[shortids [lindex $rowidlist $i]]} offs={[lindex $rowoffsets $i]}"
+ }
+ break
+ }
+ if {!$full} break
+ set o [lindex $rowoffsets $y $x]
+ }
+ }
+}
+
+proc makeuparrow {oid x y z} {
+ global rowidlist rowoffsets uparrowlen idrowranges
+
+ for {set i 1} {$i < $uparrowlen && $y > 1} {incr i} {
+ incr y -1
+ incr x $z
+ set off0 [lindex $rowoffsets $y]
+ for {set x0 $x} {1} {incr x0} {
+ if {$x0 >= [llength $off0]} {
+ set x0 [llength [lindex $rowoffsets [expr {$y-1}]]]
+ break
+ }
+ set z [lindex $off0 $x0]
+ if {$z ne {}} {
+ incr x0 $z
+ break
+ }
+ }
+ set z [expr {$x0 - $x}]
+ lset rowidlist $y [linsert [lindex $rowidlist $y] $x $oid]
+ lset rowoffsets $y [linsert [lindex $rowoffsets $y] $x $z]
+ }
+ set tmp [lreplace [lindex $rowoffsets $y] $x $x {}]
+ lset rowoffsets $y [incrange $tmp [expr {$x+1}] -1]
+ lappend idrowranges($oid) $y
+}
+
+proc initlayout {} {
+ global rowidlist rowoffsets displayorder
+ global rowlaidout rowoptim
+ global idinlist rowchk
+ global commitidx numcommits
+ global nextcolor
+
+ set commitidx 0
+ set numcommits 0
+ set displayorder {}
+ set nextcolor 0
+ set rowidlist {{}}
+ set rowoffsets {{}}
+ catch {unset idinlist}
+ catch {unset rowchk}
+ set rowlaidout 0
+ set rowoptim 0
+}
+
+proc visiblerows {} {
+ global canv numcommits linespc
+
+ set ymax [lindex [$canv cget -scrollregion] 3]
+ if {$ymax eq {} || $ymax == 0} return
+ set f [$canv yview]
+ set y0 [expr {int([lindex $f 0] * $ymax)}]
+ set r0 [expr {int(($y0 - 3) / $linespc) - 1}]
+ if {$r0 < 0} {
+ set r0 0
+ }
+ set y1 [expr {int([lindex $f 1] * $ymax)}]
+ set r1 [expr {int(($y1 - 3) / $linespc) + 1}]
+ if {$r1 >= $numcommits} {
+ set r1 [expr {$numcommits - 1}]
+ }
+ return [list $r0 $r1]
+}
+
+proc layoutmore {} {
+ global rowlaidout rowoptim commitidx numcommits optim_delay
+ global uparrowlen
+
+ set row $rowlaidout
+ set rowlaidout [layoutrows $row $commitidx 0]
+ set orow [expr {$rowlaidout - $uparrowlen - 1}]
+ if {$orow > $rowoptim} {
+ checkcrossings $rowoptim $orow
+ optimize_rows $rowoptim 0 $orow
+ set rowoptim $orow
+ }
+ set canshow [expr {$rowoptim - $optim_delay}]
+ if {$canshow > $numcommits} {
+ showstuff $canshow
+ }
+}
+
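`layoutmore` above staggers three frontiers: rows are laid out first, optimized only once they are more than `uparrowlen` rows behind the layout frontier (so up-arrows can still be inserted above them), and displayed a further `optim_delay` rows behind that. A small Python sketch of the window arithmetic, assuming illustrative default values (the real `uparrowlen` and `optim_delay` settings live elsewhere in gitk):

```python
def layout_windows(rowlaidout, uparrowlen=7, optim_delay=8):
    """Given how many rows have been laid out, return how far
    optimization and display may safely proceed, per the lag rules
    in layoutmore. Default lags are guesses, not gitk's settings."""
    rowoptim = max(0, rowlaidout - uparrowlen - 1)   # optimize frontier
    canshow = max(0, rowoptim - optim_delay)         # display frontier
    return rowoptim, canshow
```

Keeping display behind optimization is what lets gitk draw commits incrementally without later having to redraw rows whose columns shift.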
+proc showstuff {canshow} {
+ global numcommits
+ global canvy0 linespc
+ global linesegends idrowranges idrangedrawn
+
+ if {$numcommits == 0} {
+ global phase
+ set phase "incrdraw"
+ allcanvs delete all
+ }
+ set row $numcommits
+ set numcommits $canshow
+ allcanvs conf -scrollregion \
+ [list 0 0 0 [expr {$canvy0 + ($numcommits - 0.5) * $linespc + 2}]]
+ set rows [visiblerows]
+ set r0 [lindex $rows 0]
+ set r1 [lindex $rows 1]
+ for {set r $row} {$r < $canshow} {incr r} {
+ if {[info exists linesegends($r)]} {
+ foreach id $linesegends($r) {
+ set i -1
+ foreach {s e} $idrowranges($id) {
+ incr i
+ if {$e ne {} && $e < $numcommits && $s <= $r1 && $e >= $r0
+ && ![info exists idrangedrawn($id,$i)]} {
+ drawlineseg $id $i
+ set idrangedrawn($id,$i) 1
+ }
+ }
+ }
+ }
+ }
+ if {$canshow > $r1} {
+ set canshow $r1
+ }
+ while {$row < $canshow} {
+ drawcmitrow $row
+ incr row
+ }
+}
+
+proc layoutrows {row endrow last} {
+ global rowidlist rowoffsets displayorder
+ global uparrowlen downarrowlen maxwidth mingaplen
+ global nchildren parents nparents
+ global idrowranges linesegends
+ global commitidx
+ global idinlist rowchk
+
+ set idlist [lindex $rowidlist $row]
+ set offs [lindex $rowoffsets $row]
+ while {$row < $endrow} {
+ set id [lindex $displayorder $row]
+ set oldolds {}
+ set newolds {}
+ foreach p $parents($id) {
+ if {![info exists idinlist($p)]} {
+ lappend newolds $p
+ } elseif {!$idinlist($p)} {
+ lappend oldolds $p
+ }
+ }
+ set nev [expr {[llength $idlist] + [llength $newolds]
+ + [llength $oldolds] - $maxwidth + 1}]
+ if {$nev > 0} {
+ if {!$last && $row + $uparrowlen + $mingaplen >= $commitidx} break
+ for {set x [llength $idlist]} {[incr x -1] >= 0} {} {
+ set i [lindex $idlist $x]
+ if {![info exists rowchk($i)] || $row >= $rowchk($i)} {
+ set r [usedinrange $i [expr {$row - $downarrowlen}] \
+ [expr {$row + $uparrowlen + $mingaplen}]]
+ if {$r == 0} {
+ set idlist [lreplace $idlist $x $x]
+ set offs [lreplace $offs $x $x]
+ set offs [incrange $offs $x 1]
+ set idinlist($i) 0
+ set rm1 [expr {$row - 1}]
+ lappend linesegends($rm1) $i
+ lappend idrowranges($i) $rm1
+ if {[incr nev -1] <= 0} break
+ continue
+ }
+			set rowchk($i) [expr {$row + $r}]
+ }
+ }
+ lset rowidlist $row $idlist
+ lset rowoffsets $row $offs
+ }
+ set col [lsearch -exact $idlist $id]
+ if {$col < 0} {
+ set col [llength $idlist]
+ lappend idlist $id
+ lset rowidlist $row $idlist
+ set z {}
+ if {$nchildren($id) > 0} {
+ set z [expr {[llength [lindex $rowidlist [expr {$row-1}]]] - $col}]
+ unset idinlist($id)
+ }
+ lappend offs $z
+ lset rowoffsets $row $offs
+ if {$z ne {}} {
+ makeuparrow $id $col $row $z
+ }
+ } else {
+ unset idinlist($id)
+ }
+ if {[info exists idrowranges($id)]} {
+ lappend idrowranges($id) $row
+ }
+ incr row
+ set offs [ntimes [llength $idlist] 0]
+ set l [llength $newolds]
+ set idlist [eval lreplace \$idlist $col $col $newolds]
+ set o 0
+ if {$l != 1} {
+ set offs [lrange $offs 0 [expr {$col - 1}]]
+ foreach x $newolds {
+ lappend offs {}
+ incr o -1
+ }
+ incr o
+ set tmp [expr {[llength $idlist] - [llength $offs]}]
+ if {$tmp > 0} {
+ set offs [concat $offs [ntimes $tmp $o]]
+ }
+ } else {
+ lset offs $col {}
+ }
+ foreach i $newolds {
+ set idinlist($i) 1
+ set idrowranges($i) $row
+ }
+ incr col $l
+ foreach oid $oldolds {
+ set idinlist($oid) 1
+ set idlist [linsert $idlist $col $oid]
+ set offs [linsert $offs $col $o]
+ makeuparrow $oid $col $row $o
+ incr col
+ }
+ lappend rowidlist $idlist
+ lappend rowoffsets $offs
+ }
+ return $row
+}
+
+proc addextraid {id row} {
+ global displayorder commitrow commitinfo nparents
+ global commitidx
+
+ incr commitidx
+ lappend displayorder $id
+ set commitrow($id) $row
+ readcommit $id
+ if {![info exists commitinfo($id)]} {
+ set commitinfo($id) {"No commit information available"}
+ set nparents($id) 0
+ }
+}
+
+proc layouttail {} {
+ global rowidlist rowoffsets idinlist commitidx
+ global idrowranges
+
+ set row $commitidx
+ set idlist [lindex $rowidlist $row]
+ while {$idlist ne {}} {
+ set col [expr {[llength $idlist] - 1}]
+ set id [lindex $idlist $col]
+ addextraid $id $row
+ unset idinlist($id)
+ lappend idrowranges($id) $row
+ incr row
+ set offs [ntimes $col 0]
+ set idlist [lreplace $idlist $col $col]
+ lappend rowidlist $idlist
+ lappend rowoffsets $offs
+ }
+
+ foreach id [array names idinlist] {
+ addextraid $id $row
+ lset rowidlist $row [list $id]
+ lset rowoffsets $row 0
+ makeuparrow $id 0 $row 0
+ lappend idrowranges($id) $row
+ incr row
+ lappend rowidlist {}
+ lappend rowoffsets {}
+ }
+}
+
+proc insert_pad {row col npad} {
+ global rowidlist rowoffsets
+
+ set pad [ntimes $npad {}]
+ lset rowidlist $row [eval linsert [list [lindex $rowidlist $row]] $col $pad]
+ set tmp [eval linsert [list [lindex $rowoffsets $row]] $col $pad]
+ lset rowoffsets $row [incrange $tmp [expr {$col + $npad}] [expr {-$npad}]]
+}
+
+proc optimize_rows {row col endrow} {
+ global rowidlist rowoffsets idrowranges linesegends displayorder
+
+ for {} {$row < $endrow} {incr row} {
+ set idlist [lindex $rowidlist $row]
+ set offs [lindex $rowoffsets $row]
+ set haspad 0
+ set downarrowcols {}
+ if {[info exists linesegends($row)]} {
+ set downarrowcols $linesegends($row)
+ if {$col > 0} {
+ while {$downarrowcols ne {}} {
+ set i [lsearch -exact $idlist [lindex $downarrowcols 0]]
+ if {$i < 0 || $i >= $col} break
+ set downarrowcols [lrange $downarrowcols 1 end]
+ }
+ }
+ }
+ for {} {$col < [llength $offs]} {incr col} {
+ if {[lindex $idlist $col] eq {}} {
+ set haspad 1
+ continue
+ }
+ set z [lindex $offs $col]
+ if {$z eq {}} continue
+ set isarrow 0
+ set x0 [expr {$col + $z}]
+ set y0 [expr {$row - 1}]
+ set z0 [lindex $rowoffsets $y0 $x0]
+ if {$z0 eq {}} {
+ set id [lindex $idlist $col]
+ if {[info exists idrowranges($id)] &&
+ $y0 > [lindex $idrowranges($id) 0]} {
+ set isarrow 1
+ }
+ } elseif {$downarrowcols ne {} &&
+ [lindex $idlist $col] eq [lindex $downarrowcols 0]} {
+ set downarrowcols [lrange $downarrowcols 1 end]
+ set isarrow 1
+ }
+ if {$z < -1 || ($z < 0 && $isarrow)} {
+ set npad [expr {-1 - $z + $isarrow}]
+ set offs [incrange $offs $col $npad]
+ insert_pad $y0 $x0 $npad
+ if {$y0 > 0} {
+ optimize_rows $y0 $x0 $row
+ }
+ set z [lindex $offs $col]
+ set x0 [expr {$col + $z}]
+ set z0 [lindex $rowoffsets $y0 $x0]
+ } elseif {$z > 1 || ($z > 0 && $isarrow)} {
+ set npad [expr {$z - 1 + $isarrow}]
+ set y1 [expr {$row + 1}]
+ set offs2 [lindex $rowoffsets $y1]
+ set x1 -1
+ foreach z $offs2 {
+ incr x1
+ if {$z eq {} || $x1 + $z < $col} continue
+ if {$x1 + $z > $col} {
+ incr npad
+ }
+ lset rowoffsets $y1 [incrange $offs2 $x1 $npad]
+ break
+ }
+ set pad [ntimes $npad {}]
+ set idlist [eval linsert \$idlist $col $pad]
+ set tmp [eval linsert \$offs $col $pad]
+ incr col $npad
+ set offs [incrange $tmp $col [expr {-$npad}]]
+ set z [lindex $offs $col]
+ set haspad 1
+ }
+ if {$z0 eq {} && !$isarrow} {
+ # this line links to its first child on row $row-2
+ set rm2 [expr {$row - 2}]
+ set id [lindex $displayorder $rm2]
+ set xc [lsearch -exact [lindex $rowidlist $rm2] $id]
+ if {$xc >= 0} {
+ set z0 [expr {$xc - $x0}]
+ }
+ }
+ if {$z0 ne {} && $z < 0 && $z0 > 0} {
+ insert_pad $y0 $x0 1
+ set offs [incrange $offs $col 1]
+ optimize_rows $y0 [expr {$x0 + 1}] $row
+ }
+ }
+ if {!$haspad} {
+ set o {}
+ for {set col [llength $idlist]} {[incr col -1] >= 0} {} {
+ set o [lindex $offs $col]
+ if {$o eq {}} {
+ # check if this is the link to the first child
+ set id [lindex $idlist $col]
+ if {[info exists idrowranges($id)] &&
+ $row == [lindex $idrowranges($id) 0]} {
+ # it is, work out offset to child
+ set y0 [expr {$row - 1}]
+ set id [lindex $displayorder $y0]
+ set x0 [lsearch -exact [lindex $rowidlist $y0] $id]
+ if {$x0 >= 0} {
+ set o [expr {$x0 - $col}]
+ }
+ }
+ }
+ if {$o eq {} || $o <= 0} break
+ }
+ if {$o ne {} && [incr col] < [llength $idlist]} {
+ set y1 [expr {$row + 1}]
+ set offs2 [lindex $rowoffsets $y1]
+ set x1 -1
+ foreach z $offs2 {
+ incr x1
+ if {$z eq {} || $x1 + $z < $col} continue
+ lset rowoffsets $y1 [incrange $offs2 $x1 1]
+ break
+ }
+ set idlist [linsert $idlist $col {}]
+ set tmp [linsert $offs $col {}]
+ incr col
+ set offs [incrange $tmp $col -1]
+ }
+ }
+ lset rowidlist $row $idlist
+ lset rowoffsets $row $offs
+ set col 0
+ }
+}
+
+proc xc {row col} {
+ global canvx0 linespc
+ return [expr {$canvx0 + $col * $linespc}]
+}
+
+proc yc {row} {
+ global canvy0 linespc
+ return [expr {$canvy0 + $row * $linespc}]
+}
+
+proc linewidth {id} {
+ global thickerline lthickness
+
+ set wid $lthickness
+ if {[info exists thickerline] && $id eq $thickerline} {
+ set wid [expr {2 * $lthickness}]
+ }
+ return $wid
+}
+
+proc drawlineseg {id i} {
+ global rowoffsets rowidlist idrowranges
+ global displayorder
+ global canv colormap
+
+ set startrow [lindex $idrowranges($id) [expr {2 * $i}]]
+ set row [lindex $idrowranges($id) [expr {2 * $i + 1}]]
+ if {$startrow == $row} return
+ assigncolor $id
+ set coords {}
+ set col [lsearch -exact [lindex $rowidlist $row] $id]
+ if {$col < 0} {
+	puts "oops: drawlineseg: id $id not on row $row"
+ return
+ }
+ set lasto {}
+ set ns 0
+ while {1} {
+ set o [lindex $rowoffsets $row $col]
+ if {$o eq {}} break
+ if {$o ne $lasto} {
+ # changing direction
+ set x [xc $row $col]
+ set y [yc $row]
+ lappend coords $x $y
+ set lasto $o
+ }
+ incr col $o
+ incr row -1
+ }
+ set x [xc $row $col]
+ set y [yc $row]
+ lappend coords $x $y
+ if {$i == 0} {
+ # draw the link to the first child as part of this line
+ incr row -1
+ set child [lindex $displayorder $row]
+ set ccol [lsearch -exact [lindex $rowidlist $row] $child]
+ if {$ccol >= 0} {
+ set x [xc $row $ccol]
+ set y [yc $row]
+ if {$ccol < $col - 1} {
+ lappend coords [xc $row [expr {$col - 1}]] [yc $row]
+ } elseif {$ccol > $col + 1} {
+ lappend coords [xc $row [expr {$col + 1}]] [yc $row]
+ }
+ lappend coords $x $y
+ }
+ }
+ if {[llength $coords] < 4} return
+ set last [expr {[llength $idrowranges($id)] / 2 - 1}]
+ set arrow [expr {2 * ($i > 0) + ($i < $last)}]
+ set arrow [lindex {none first last both} $arrow]
+ set t [$canv create line $coords -width [linewidth $id] \
+ -fill $colormap($id) -tags lines.$id -arrow $arrow]
+ $canv lower $t
+ bindline $t $id
+}
+
+proc drawparentlinks {id row col olds} {
+ global rowidlist canv colormap idrowranges
+
+ set row2 [expr {$row + 1}]
+ set x [xc $row $col]
+ set y [yc $row]
+ set y2 [yc $row2]
+ set ids [lindex $rowidlist $row2]
+ # rmx = right-most X coord used
+ set rmx 0
+ foreach p $olds {
+ if {[info exists idrowranges($p)] &&
+ $row2 == [lindex $idrowranges($p) 0] &&
+ $row2 < [lindex $idrowranges($p) 1]} {
+ # drawlineseg will do this one for us
+ continue
+ }
+ set i [lsearch -exact $ids $p]
+ if {$i < 0} {
+ puts "oops, parent $p of $id not in list"
+ continue
+ }
+ assigncolor $p
+ # should handle duplicated parents here...
+ set coords [list $x $y]
+ if {$i < $col - 1} {
+ lappend coords [xc $row [expr {$i + 1}]] $y
+ } elseif {$i > $col + 1} {
+ lappend coords [xc $row [expr {$i - 1}]] $y
+ }
+ set x2 [xc $row2 $i]
+ if {$x2 > $rmx} {
+ set rmx $x2
+ }
+ lappend coords $x2 $y2
+ set t [$canv create line $coords -width [linewidth $p] \
+ -fill $colormap($p) -tags lines.$p]
+ $canv lower $t
+ bindline $t $p
+ }
+ return $rmx
+}
+
+proc drawlines {id} {
+ global colormap canv
+ global idrowranges idrangedrawn
+ global children iddrawn commitrow rowidlist
+
+ $canv delete lines.$id
+ set nr [expr {[llength $idrowranges($id)] / 2}]
+ for {set i 0} {$i < $nr} {incr i} {
+ if {[info exists idrangedrawn($id,$i)]} {
+ drawlineseg $id $i
+ }
+ }
+ if {[info exists children($id)]} {
+ foreach child $children($id) {
+ if {[info exists iddrawn($child)]} {
+ set row $commitrow($child)
+ set col [lsearch -exact [lindex $rowidlist $row] $child]
+ if {$col >= 0} {
+ drawparentlinks $child $row $col [list $id]
+ }
+ }
+ }
+ }
+}
+
+proc drawcmittext {id row col rmx} {
+ global linespc canv canv2 canv3 canvy0
+ global commitlisted commitinfo rowidlist
+ global rowtextx idpos idtags idheads idotherrefs
+ global linehtag linentag linedtag
+ global mainfont namefont
+
+ set ofill [expr {[info exists commitlisted($id)]? "blue": "white"}]
+ set x [xc $row $col]
+ set y [yc $row]
+ set orad [expr {$linespc / 3}]
+ set t [$canv create oval [expr {$x - $orad}] [expr {$y - $orad}] \
+ [expr {$x + $orad - 1}] [expr {$y + $orad - 1}] \
+ -fill $ofill -outline black -width 1]
+ $canv raise $t
+ $canv bind $t <1> {selcanvline {} %x %y}
+ set xt [xc $row [llength [lindex $rowidlist $row]]]
+ if {$xt < $rmx} {
+ set xt $rmx
+ }
+ set rowtextx($row) $xt
+ set idpos($id) [list $x $xt $y]
+ if {[info exists idtags($id)] || [info exists idheads($id)]
+ || [info exists idotherrefs($id)]} {
+ set xt [drawtags $id $x $xt $y]
+ }
+ set headline [lindex $commitinfo($id) 0]
+ set name [lindex $commitinfo($id) 1]
+ set date [lindex $commitinfo($id) 2]
+ set date [formatdate $date]
+ set linehtag($row) [$canv create text $xt $y -anchor w \
+ -text $headline -font $mainfont ]
+ $canv bind $linehtag($row) <Button-3> "rowmenu %X %Y $id"
+ set linentag($row) [$canv2 create text 3 $y -anchor w \
+ -text $name -font $namefont]
+ set linedtag($row) [$canv3 create text 3 $y -anchor w \
+ -text $date -font $mainfont]
+}
+
+proc drawcmitrow {row} {
+ global displayorder rowidlist
+ global idrowranges idrangedrawn iddrawn
+ global commitinfo commitlisted parents numcommits
+
+ if {$row >= $numcommits} return
+ foreach id [lindex $rowidlist $row] {
+ if {![info exists idrowranges($id)]} continue
+ set i -1
+ foreach {s e} $idrowranges($id) {
+ incr i
+ if {$row < $s} continue
+ if {$e eq {}} break
+ if {$row <= $e} {
+ if {$e < $numcommits && ![info exists idrangedrawn($id,$i)]} {
+ drawlineseg $id $i
+ set idrangedrawn($id,$i) 1
+ }
+ break
+ }
+ }
+ }
+
+ set id [lindex $displayorder $row]
+ if {[info exists iddrawn($id)]} return
+ set col [lsearch -exact [lindex $rowidlist $row] $id]
+ if {$col < 0} {
+ puts "oops, row $row id $id not in list"
+ return
+ }
+ if {![info exists commitinfo($id)]} {
+ getcommit $id
+ }
+ assigncolor $id
+ if {[info exists commitlisted($id)] && [info exists parents($id)]
+ && $parents($id) ne {}} {
+ set rmx [drawparentlinks $id $row $col $parents($id)]
+ } else {
+ set rmx 0
+ }
+ drawcmittext $id $row $col $rmx
+ set iddrawn($id) 1
+}
+
+proc drawfrac {f0 f1} {
+ global numcommits canv
+ global linespc
+
+ set ymax [lindex [$canv cget -scrollregion] 3]
+ if {$ymax eq {} || $ymax == 0} return
+ set y0 [expr {int($f0 * $ymax)}]
+ set row [expr {int(($y0 - 3) / $linespc) - 1}]
+ if {$row < 0} {
+ set row 0
+ }
+ set y1 [expr {int($f1 * $ymax)}]
+ set endrow [expr {int(($y1 - 3) / $linespc) + 1}]
+ if {$endrow >= $numcommits} {
+ set endrow [expr {$numcommits - 1}]
+ }
+ for {} {$row <= $endrow} {incr row} {
+ drawcmitrow $row
+ }
+}
+
+proc drawvisible {} {
+ global canv
+ eval drawfrac [$canv yview]
+}
+
+proc clear_display {} {
+ global iddrawn idrangedrawn
+
+ allcanvs delete all
+ catch {unset iddrawn}
+ catch {unset idrangedrawn}
+}
+
proc assigncolor {id} {
- global colormap commcolors colors nextcolor
+ global colormap colors nextcolor
global parents nparents children nchildren
global cornercrossings crossings
if {[info exists colormap($id)]} return
set ncolors [llength $colors]
- if {$nparents($id) <= 1 && $nchildren($id) == 1} {
+ if {$nchildren($id) == 1} {
set child [lindex $children($id) 0]
if {[info exists colormap($child)]
&& $nparents($child) == 1} {
set colormap($id) $c
}
-proc initgraph {} {
- global canvy canvy0 lineno numcommits nextcolor linespc
- global nchildren ncleft
- global displist nhyperspace
-
- allcanvs delete all
- set nextcolor 0
- set canvy $canvy0
- set lineno -1
- set numcommits 0
- foreach v {mainline mainlinearrow sidelines colormap cornercrossings
- crossings idline lineid} {
- global $v
- catch {unset $v}
- }
- foreach id [array names nchildren] {
- set ncleft($id) $nchildren($id)
- }
- set displist {}
- set nhyperspace 0
-}
-
proc bindline {t id} {
global canv
$canv bind $t <Button-1> "lineclick %x %y $id 1"
}
-proc drawlines {id xtra delold} {
- global mainline mainlinearrow sidelines lthickness colormap canv
-
- if {$delold} {
- $canv delete lines.$id
- }
- if {[info exists mainline($id)]} {
- set t [$canv create line $mainline($id) \
- -width [expr {($xtra + 1) * $lthickness}] \
- -fill $colormap($id) -tags lines.$id \
- -arrow $mainlinearrow($id)]
- $canv lower $t
- bindline $t $id
- }
- if {[info exists sidelines($id)]} {
- foreach ls $sidelines($id) {
- set coords [lindex $ls 0]
- set thick [lindex $ls 1]
- set arrow [lindex $ls 2]
- set t [$canv create line $coords -fill $colormap($id) \
- -width [expr {($thick + $xtra) * $lthickness}] \
- -arrow $arrow -tags lines.$id]
- $canv lower $t
- bindline $t $id
- }
- }
-}
-
-# level here is an index in displist
-proc drawcommitline {level} {
- global parents children nparents displist
- global canv canv2 canv3 mainfont namefont canvy linespc
- global lineid linehtag linentag linedtag commitinfo
- global colormap numcommits currentparents dupparents
- global idtags idline idheads idotherrefs
- global lineno lthickness mainline mainlinearrow sidelines
- global commitlisted rowtextx idpos lastuse displist
- global oldnlines olddlevel olddisplist
-
- incr numcommits
- incr lineno
- set id [lindex $displist $level]
- set lastuse($id) $lineno
- set lineid($lineno) $id
- set idline($id) $lineno
- set ofill [expr {[info exists commitlisted($id)]? "blue": "white"}]
- if {![info exists commitinfo($id)]} {
- readcommit $id
- if {![info exists commitinfo($id)]} {
- set commitinfo($id) {"No commit information available"}
- set nparents($id) 0
- }
- }
- assigncolor $id
- set currentparents {}
- set dupparents {}
- if {[info exists commitlisted($id)] && [info exists parents($id)]} {
- foreach p $parents($id) {
- if {[lsearch -exact $currentparents $p] < 0} {
- lappend currentparents $p
- } else {
- # remember that this parent was listed twice
- lappend dupparents $p
- }
- }
- }
- set x [xcoord $level $level $lineno]
- set y1 $canvy
- set canvy [expr {$canvy + $linespc}]
- allcanvs conf -scrollregion \
- [list 0 0 0 [expr {$y1 + 0.5 * $linespc + 2}]]
- if {[info exists mainline($id)]} {
- lappend mainline($id) $x $y1
- if {$mainlinearrow($id) ne "none"} {
- set mainline($id) [trimdiagstart $mainline($id)]
- }
- }
- drawlines $id 0 0
- set orad [expr {$linespc / 3}]
- set t [$canv create oval [expr {$x - $orad}] [expr {$y1 - $orad}] \
- [expr {$x + $orad - 1}] [expr {$y1 + $orad - 1}] \
- -fill $ofill -outline black -width 1]
- $canv raise $t
- $canv bind $t <1> {selcanvline {} %x %y}
- set xt [xcoord [llength $displist] $level $lineno]
- if {[llength $currentparents] > 2} {
- set xt [expr {$xt + ([llength $currentparents] - 2) * $linespc}]
- }
- set rowtextx($lineno) $xt
- set idpos($id) [list $x $xt $y1]
- if {[info exists idtags($id)] || [info exists idheads($id)]
- || [info exists idotherrefs($id)]} {
- set xt [drawtags $id $x $xt $y1]
- }
- set headline [lindex $commitinfo($id) 0]
- set name [lindex $commitinfo($id) 1]
- set date [lindex $commitinfo($id) 2]
- set date [formatdate $date]
- set linehtag($lineno) [$canv create text $xt $y1 -anchor w \
- -text $headline -font $mainfont ]
- $canv bind $linehtag($lineno) <Button-3> "rowmenu %X %Y $id"
- set linentag($lineno) [$canv2 create text 3 $y1 -anchor w \
- -text $name -font $namefont]
- set linedtag($lineno) [$canv3 create text 3 $y1 -anchor w \
- -text $date -font $mainfont]
-
- set olddlevel $level
- set olddisplist $displist
- set oldnlines [llength $displist]
-}
-
proc drawtags {id x xt y1} {
global idtags idheads idotherrefs
global linespc lthickness
- global canv mainfont idline rowtextx
+ global canv mainfont commitrow rowtextx
set marks {}
set ntags 0
$xr $yt $xr $yb $xl $yb $x [expr {$yb - $delta}] \
-width 1 -outline black -fill yellow -tags tag.$id]
$canv bind $t <1> [list showtag $tag 1]
- set rowtextx($idline($id)) [expr {$xr + $linespc}]
+ set rowtextx($commitrow($id)) [expr {$xr + $linespc}]
} else {
# draw a head or other ref
if {[incr nheads -1] >= 0} {
return $xt
}
-proc notecrossings {id lo hi corner} {
- global olddisplist crossings cornercrossings
+proc checkcrossings {row endrow} {
+ global displayorder parents rowidlist
+
+ for {} {$row < $endrow} {incr row} {
+ set id [lindex $displayorder $row]
+ set i [lsearch -exact [lindex $rowidlist $row] $id]
+ if {$i < 0} continue
+ set idlist [lindex $rowidlist [expr {$row+1}]]
+ foreach p $parents($id) {
+ set j [lsearch -exact $idlist $p]
+ if {$j > 0} {
+ if {$j < $i - 1} {
+ notecrossings $row $p $j $i [expr {$j+1}]
+ } elseif {$j > $i + 1} {
+ notecrossings $row $p $i $j [expr {$j-1}]
+ }
+ }
+ }
+ }
+}
+
+proc notecrossings {row id lo hi corner} {
+ global rowidlist crossings cornercrossings
for {set i $lo} {[incr i] < $hi} {} {
- set p [lindex $olddisplist $i]
+ set p [lindex [lindex $rowidlist $row] $i]
if {$p == {}} continue
if {$i == $corner} {
if {![info exists cornercrossings($id)]
return $x
}
-# it seems Tk can't draw arrows on the end of diagonal line segments...
-proc trimdiagend {line} {
- while {[llength $line] > 4} {
- set x1 [lindex $line end-3]
- set y1 [lindex $line end-2]
- set x2 [lindex $line end-1]
- set y2 [lindex $line end]
- if {($x1 == $x2) != ($y1 == $y2)} break
- set line [lreplace $line end-1 end]
- }
- return $line
-}
-
-proc trimdiagstart {line} {
- while {[llength $line] > 4} {
- set x1 [lindex $line 0]
- set y1 [lindex $line 1]
- set x2 [lindex $line 2]
- set y2 [lindex $line 3]
- if {($x1 == $x2) != ($y1 == $y2)} break
- set line [lreplace $line 0 1]
- }
- return $line
-}
-
-proc drawslants {id needonscreen nohs} {
- global canv mainline mainlinearrow sidelines
- global canvx0 canvy xspc1 xspc2 lthickness
- global currentparents dupparents
- global lthickness linespc canvy colormap lineno geometry
- global maxgraphpct maxwidth
- global displist onscreen lastuse
- global parents commitlisted
- global oldnlines olddlevel olddisplist
- global nhyperspace numcommits nnewparents
-
- if {$lineno < 0} {
- lappend displist $id
- set onscreen($id) 1
- return 0
- }
-
- set y1 [expr {$canvy - $linespc}]
- set y2 $canvy
-
- # work out what we need to get back on screen
- set reins {}
- if {$onscreen($id) < 0} {
- # next to do isn't displayed, better get it on screen...
- lappend reins [list $id 0]
- }
- # make sure all the previous commits's parents are on the screen
- foreach p $currentparents {
- if {$onscreen($p) < 0} {
- lappend reins [list $p 0]
- }
- }
- # bring back anything requested by caller
- if {$needonscreen ne {}} {
- lappend reins $needonscreen
- }
-
- # try the shortcut
- if {$currentparents == $id && $onscreen($id) == 0 && $reins eq {}} {
- set dlevel $olddlevel
- set x [xcoord $dlevel $dlevel $lineno]
- set mainline($id) [list $x $y1]
- set mainlinearrow($id) none
- set lastuse($id) $lineno
- set displist [lreplace $displist $dlevel $dlevel $id]
- set onscreen($id) 1
- set xspc1([expr {$lineno + 1}]) $xspc1($lineno)
- return $dlevel
- }
-
- # update displist
- set displist [lreplace $displist $olddlevel $olddlevel]
- set j $olddlevel
- foreach p $currentparents {
- set lastuse($p) $lineno
- if {$onscreen($p) == 0} {
- set displist [linsert $displist $j $p]
- set onscreen($p) 1
- incr j
- }
- }
- if {$onscreen($id) == 0} {
- lappend displist $id
- set onscreen($id) 1
- }
-
- # remove the null entry if present
- set nullentry [lsearch -exact $displist {}]
- if {$nullentry >= 0} {
- set displist [lreplace $displist $nullentry $nullentry]
- }
-
- # bring back the ones we need now (if we did it earlier
- # it would change displist and invalidate olddlevel)
- foreach pi $reins {
- # test again in case of duplicates in reins
- set p [lindex $pi 0]
- if {$onscreen($p) < 0} {
- set onscreen($p) 1
- set lastuse($p) $lineno
- set displist [linsert $displist [lindex $pi 1] $p]
- incr nhyperspace -1
- }
- }
-
- set lastuse($id) $lineno
-
- # see if we need to make any lines jump off into hyperspace
- set displ [llength $displist]
- if {$displ > $maxwidth} {
- set ages {}
- foreach x $displist {
- lappend ages [list $lastuse($x) $x]
- }
- set ages [lsort -integer -index 0 $ages]
- set k 0
- while {$displ > $maxwidth} {
- set use [lindex $ages $k 0]
- set victim [lindex $ages $k 1]
- if {$use >= $lineno - 5} break
- incr k
- if {[lsearch -exact $nohs $victim] >= 0} continue
- set i [lsearch -exact $displist $victim]
- set displist [lreplace $displist $i $i]
- set onscreen($victim) -1
- incr nhyperspace
- incr displ -1
- if {$i < $nullentry} {
- incr nullentry -1
- }
- set x [lindex $mainline($victim) end-1]
- lappend mainline($victim) $x $y1
- set line [trimdiagend $mainline($victim)]
- set arrow "last"
- if {$mainlinearrow($victim) ne "none"} {
- set line [trimdiagstart $line]
- set arrow "both"
- }
- lappend sidelines($victim) [list $line 1 $arrow]
- unset mainline($victim)
- }
- }
-
- set dlevel [lsearch -exact $displist $id]
-
- # If we are reducing, put in a null entry
- if {$displ < $oldnlines} {
- # does the next line look like a merge?
- # i.e. does it have > 1 new parent?
- if {$nnewparents($id) > 1} {
- set i [expr {$dlevel + 1}]
- } elseif {$nnewparents([lindex $olddisplist $olddlevel]) == 0} {
- set i $olddlevel
- if {$nullentry >= 0 && $nullentry < $i} {
- incr i -1
- }
- } elseif {$nullentry >= 0} {
- set i $nullentry
- while {$i < $displ
- && [lindex $olddisplist $i] == [lindex $displist $i]} {
- incr i
- }
- } else {
- set i $olddlevel
- if {$dlevel >= $i} {
- incr i
- }
- }
- if {$i < $displ} {
- set displist [linsert $displist $i {}]
- incr displ
- if {$dlevel >= $i} {
- incr dlevel
- }
- }
- }
-
- # decide on the line spacing for the next line
- set lj [expr {$lineno + 1}]
- set maxw [expr {$maxgraphpct * $geometry(canv1) / 100}]
- if {$displ <= 1 || $canvx0 + $displ * $xspc2 <= $maxw} {
- set xspc1($lj) $xspc2
- } else {
- set xspc1($lj) [expr {($maxw - $canvx0 - $xspc2) / ($displ - 1)}]
- if {$xspc1($lj) < $lthickness} {
- set xspc1($lj) $lthickness
- }
- }
-
- foreach idi $reins {
- set id [lindex $idi 0]
- set j [lsearch -exact $displist $id]
- set xj [xcoord $j $dlevel $lj]
- set mainline($id) [list $xj $y2]
- set mainlinearrow($id) first
- }
-
- set i -1
- foreach id $olddisplist {
- incr i
- if {$id == {}} continue
- if {$onscreen($id) <= 0} continue
- set xi [xcoord $i $olddlevel $lineno]
- if {$i == $olddlevel} {
- foreach p $currentparents {
- set j [lsearch -exact $displist $p]
- set coords [list $xi $y1]
- set xj [xcoord $j $dlevel $lj]
- if {$xj < $xi - $linespc} {
- lappend coords [expr {$xj + $linespc}] $y1
- notecrossings $p $j $i [expr {$j + 1}]
- } elseif {$xj > $xi + $linespc} {
- lappend coords [expr {$xj - $linespc}] $y1
- notecrossings $p $i $j [expr {$j - 1}]
- }
- if {[lsearch -exact $dupparents $p] >= 0} {
- # draw a double-width line to indicate the doubled parent
- lappend coords $xj $y2
- lappend sidelines($p) [list $coords 2 none]
- if {![info exists mainline($p)]} {
- set mainline($p) [list $xj $y2]
- set mainlinearrow($p) none
- }
- } else {
- # normal case, no parent duplicated
- set yb $y2
- set dx [expr {abs($xi - $xj)}]
- if {0 && $dx < $linespc} {
- set yb [expr {$y1 + $dx}]
- }
- if {![info exists mainline($p)]} {
- if {$xi != $xj} {
- lappend coords $xj $yb
- }
- set mainline($p) $coords
- set mainlinearrow($p) none
- } else {
- lappend coords $xj $yb
- if {$yb < $y2} {
- lappend coords $xj $y2
- }
- lappend sidelines($p) [list $coords 1 none]
- }
- }
- }
- } else {
- set j $i
- if {[lindex $displist $i] != $id} {
- set j [lsearch -exact $displist $id]
- }
- if {$j != $i || $xspc1($lineno) != $xspc1($lj)
- || ($olddlevel < $i && $i < $dlevel)
- || ($dlevel < $i && $i < $olddlevel)} {
- set xj [xcoord $j $dlevel $lj]
- lappend mainline($id) $xi $y1 $xj $y2
- }
- }
- }
- return $dlevel
-}
-
-# search for x in a list of lists
-proc llsearch {llist x} {
- set i 0
- foreach l $llist {
- if {$l == $x || [lsearch -exact $l $x] >= 0} {
- return $i
- }
- incr i
- }
- return -1
-}
-
-proc drawmore {reading} {
- global displayorder numcommits ncmupdate nextupdate
- global stopped nhyperspace parents commitlisted
- global maxwidth onscreen displist currentparents olddlevel
-
- set n [llength $displayorder]
- while {$numcommits < $n} {
- set id [lindex $displayorder $numcommits]
- set ctxend [expr {$numcommits + 10}]
- if {!$reading && $ctxend > $n} {
- set ctxend $n
- }
- set dlist {}
- if {$numcommits > 0} {
- set dlist [lreplace $displist $olddlevel $olddlevel]
- set i $olddlevel
- foreach p $currentparents {
- if {$onscreen($p) == 0} {
- set dlist [linsert $dlist $i $p]
- incr i
- }
- }
- }
- set nohs {}
- set reins {}
- set isfat [expr {[llength $dlist] > $maxwidth}]
- if {$nhyperspace > 0 || $isfat} {
- if {$ctxend > $n} break
- # work out what to bring back and
- # what we want to don't want to send into hyperspace
- set room 1
- for {set k $numcommits} {$k < $ctxend} {incr k} {
- set x [lindex $displayorder $k]
- set i [llsearch $dlist $x]
- if {$i < 0} {
- set i [llength $dlist]
- lappend dlist $x
- }
- if {[lsearch -exact $nohs $x] < 0} {
- lappend nohs $x
- }
- if {$reins eq {} && $onscreen($x) < 0 && $room} {
- set reins [list $x $i]
- }
- set newp {}
- if {[info exists commitlisted($x)]} {
- set right 0
- foreach p $parents($x) {
- if {[llsearch $dlist $p] < 0} {
- lappend newp $p
- if {[lsearch -exact $nohs $p] < 0} {
- lappend nohs $p
- }
- if {$reins eq {} && $onscreen($p) < 0 && $room} {
- set reins [list $p [expr {$i + $right}]]
- }
- }
- set right 1
- }
- }
- set l [lindex $dlist $i]
- if {[llength $l] == 1} {
- set l $newp
- } else {
- set j [lsearch -exact $l $x]
- set l [concat [lreplace $l $j $j] $newp]
- }
- set dlist [lreplace $dlist $i $i $l]
- if {$room && $isfat && [llength $newp] <= 1} {
- set room 0
- }
- }
- }
-
- set dlevel [drawslants $id $reins $nohs]
- drawcommitline $dlevel
- if {[clock clicks -milliseconds] >= $nextupdate
- && $numcommits >= $ncmupdate} {
- doupdate $reading
- if {$stopped} break
- }
- }
-}
-
-# level here is an index in todo
-proc updatetodo {level noshortcut} {
- global ncleft todo nnewparents
- global commitlisted parents onscreen
-
- set id [lindex $todo $level]
- set olds {}
- if {[info exists commitlisted($id)]} {
- foreach p $parents($id) {
- if {[lsearch -exact $olds $p] < 0} {
- lappend olds $p
- }
- }
- }
- if {!$noshortcut && [llength $olds] == 1} {
- set p [lindex $olds 0]
- if {$ncleft($p) == 1 && [lsearch -exact $todo $p] < 0} {
- set ncleft($p) 0
- set todo [lreplace $todo $level $level $p]
- set onscreen($p) 0
- set nnewparents($id) 1
- return 0
- }
- }
-
- set todo [lreplace $todo $level $level]
- set i $level
- set n 0
- foreach p $olds {
- incr ncleft($p) -1
- set k [lsearch -exact $todo $p]
- if {$k < 0} {
- set todo [linsert $todo $i $p]
- set onscreen($p) 0
- incr i
- incr n
- }
- }
- set nnewparents($id) $n
-
- return 1
-}
-
-proc decidenext {{noread 0}} {
- global ncleft todo
- global datemode cdate
- global commitinfo
-
- # choose which one to do next time around
- set todol [llength $todo]
- set level -1
- set latest {}
- for {set k $todol} {[incr k -1] >= 0} {} {
- set p [lindex $todo $k]
- if {$ncleft($p) == 0} {
- if {$datemode} {
- if {![info exists commitinfo($p)]} {
- if {$noread} {
- return {}
- }
- readcommit $p
- }
- if {$latest == {} || $cdate($p) > $latest} {
- set level $k
- set latest $cdate($p)
- }
- } else {
- set level $k
- break
- }
- }
- }
-
- return $level
-}
-
-proc drawcommit {id reading} {
- global phase todo nchildren datemode nextupdate revlistorder ncleft
- global numcommits ncmupdate displayorder todo onscreen parents
- global commitlisted commitordered
-
- if {$phase != "incrdraw"} {
- set phase incrdraw
- set displayorder {}
- set todo {}
- initgraph
- catch {unset commitordered}
- }
- set commitordered($id) 1
- if {$nchildren($id) == 0} {
- lappend todo $id
- set onscreen($id) 0
- }
- if {$revlistorder} {
- set level [lsearch -exact $todo $id]
- if {$level < 0} {
- error_popup "oops, $id isn't in todo"
- return
- }
- lappend displayorder $id
- updatetodo $level 0
- } else {
- set level [decidenext 1]
- if {$level == {} || $level < 0} return
- while 1 {
- set id [lindex $todo $level]
- if {![info exists commitordered($id)]} {
- break
- }
- lappend displayorder [lindex $todo $level]
- if {[updatetodo $level $datemode]} {
- set level [decidenext 1]
- if {$level == {} || $level < 0} break
- }
- }
- }
- drawmore $reading
-}
-
proc finishcommits {} {
- global phase oldcommits commits
+ global commitidx phase
global canv mainfont ctext maincursor textcursor
- global parents displayorder todo
+ global findinprogress
- if {$phase == "incrdraw" || $phase == "removecommits"} {
- foreach id $oldcommits {
- lappend commits $id
- drawcommit $id 0
- }
- set oldcommits {}
+ if {$commitidx > 0} {
drawrest
- } elseif {$phase == "updatecommits"} {
- # there were no new commits, in fact
- set commits $oldcommits
- set oldcommits {}
- set phase {}
} else {
$canv delete all
$canv create text 3 3 -anchor nw -text "No commits selected" \
-font $mainfont -tags textitems
- set phase {}
}
- . config -cursor $maincursor
- settextcursor $textcursor
+ if {![info exists findinprogress]} {
+ . config -cursor $maincursor
+ settextcursor $textcursor
+ }
+ set phase {}
}
# Don't change the text pane cursor if it is currently the hand cursor,
set curtextcursor $c
}
-proc drawgraph {} {
- global nextupdate startmsecs ncmupdate
- global displayorder onscreen
-
- if {$displayorder == {}} return
- set startmsecs [clock clicks -milliseconds]
- set nextupdate [expr {$startmsecs + 100}]
- set ncmupdate 1
- initgraph
- foreach id $displayorder {
- set onscreen($id) 0
- }
- drawmore 0
-}
-
proc drawrest {} {
- global phase stopped redisplaying selectedline
- global datemode todo displayorder ncleft
- global numcommits ncmupdate
- global nextupdate startmsecs revlistorder
+ global numcommits
+ global startmsecs
+ global canvy0 numcommits linespc
+ global rowlaidout commitidx
- set level [decidenext]
- if {$level >= 0} {
- set phase drawgraph
- while 1 {
- lappend displayorder [lindex $todo $level]
- set hard [updatetodo $level $datemode]
- if {$hard} {
- set level [decidenext]
- if {$level < 0} break
- }
- }
- }
- if {$todo != {}} {
- puts "ERROR: none of the pending commits can be done yet:"
- foreach p $todo {
- puts " $p ($ncleft($p))"
- }
- }
+ set row $rowlaidout
+ layoutrows $rowlaidout $commitidx 1
+ layouttail
+ optimize_rows $row 0 $commitidx
+ showstuff $commitidx
- drawmore 0
- set phase {}
set drawmsecs [expr {[clock clicks -milliseconds] - $startmsecs}]
#puts "overall $drawmsecs ms for $numcommits commits"
- if {$redisplaying} {
- if {$stopped == 0 && [info exists selectedline]} {
- selectline $selectedline 0
- }
- if {$stopped == 1} {
- set stopped 0
- after idle drawgraph
- } else {
- set redisplaying 0
- }
- }
}
proc findmatches {f} {
proc dofind {} {
global findtype findloc findstring markedmatches commitinfo
- global numcommits lineid linehtag linentag linedtag
+ global numcommits displayorder linehtag linentag linedtag
global mainfont namefont canv canv2 canv3 selectedline
- global matchinglines foundstring foundstrlen
+ global matchinglines foundstring foundstrlen matchstring
+ global commitdata
stopfindproc
unmarkmatches
}
set foundstrlen [string length $findstring]
if {$foundstrlen == 0} return
+ regsub -all {[*?\[\\]} $foundstring {\\&} matchstring
+ set matchstring "*$matchstring*"
if {$findloc == "Files"} {
findfiles
return
}
set didsel 0
set fldtypes {Headline Author Date Committer CDate Comment}
- for {set l 0} {$l < $numcommits} {incr l} {
- set id $lineid($l)
+ set l -1
+ foreach id $displayorder {
+ set d $commitdata($id)
+ incr l
+ if {$findtype == "Regexp"} {
+ set doesmatch [regexp $foundstring $d]
+ } elseif {$findtype == "IgnCase"} {
+ set doesmatch [string match -nocase $matchstring $d]
+ } else {
+ set doesmatch [string match $matchstring $d]
+ }
+ if {!$doesmatch} continue
+ if {![info exists commitinfo($id)]} {
+ getcommit $id
+ }
set info $commitinfo($id)
set doesmatch 0
foreach f $info ty $fldtypes {
if {$matches == {}} continue
set doesmatch 1
if {$ty == "Headline"} {
+ drawcmitrow $l
markmatches $canv $l $f $linehtag($l) $matches $mainfont
} elseif {$ty == "Author"} {
+ drawcmitrow $l
markmatches $canv2 $l $f $linentag($l) $matches $namefont
} elseif {$ty == "Date"} {
+ drawcmitrow $l
markmatches $canv3 $l $f $linedtag($l) $matches $mainfont
}
}
proc findpatches {} {
global findstring selectedline numcommits
global findprocpid findprocfile
- global finddidsel ctext lineid findinprogress
+ global finddidsel ctext displayorder findinprogress
global findinsertpos
if {$numcommits == 0} return
if {[incr l] >= $numcommits} {
set l 0
}
- append inputids $lineid($l) "\n"
+ append inputids [lindex $displayorder $l] "\n"
}
if {[catch {
proc readfindproc {} {
global findprocfile finddidsel
- global idline matchinglines findinsertpos
+ global commitrow matchinglines findinsertpos
set n [gets $findprocfile line]
if {$n < 0} {
stopfindproc
return
}
- if {![info exists idline($id)]} {
+ if {![info exists commitrow($id)]} {
puts stderr "spurious id: $id"
return
}
- set l $idline($id)
+ set l $commitrow($id)
insertmatch $l $id
}
}
proc findfiles {} {
- global selectedline numcommits lineid ctext
+ global selectedline numcommits displayorder ctext
global ffileline finddidsel parents nparents
global findinprogress findstartline findinsertpos
global treediffs fdiffid fdiffsneeded fdiffpos
set diffsneeded {}
set fdiffsneeded {}
while 1 {
- set id $lineid($l)
+ set id [lindex $displayorder $l]
if {$findmergefiles || $nparents($id) == 1} {
if {![info exists treediffs($id)]} {
append diffsneeded "$id\n"
set finddidsel 0
set findinsertpos end
- set id $lineid($l)
+ set id [lindex $displayorder $l]
. config -cursor watch
settextcursor watch
set findinprogress 1
proc findcont {id} {
global findid treediffs parents nparents
global ffileline findstartline finddidsel
- global lineid numcommits matchinglines findinprogress
+ global displayorder numcommits matchinglines findinprogress
global findmergefiles
set l $ffileline
set l 0
}
if {$l == $findstartline} break
- set id $lineid($l)
+ set id [lindex $displayorder $l]
}
stopfindproc
if {!$finddidsel} {
# mark a commit as matching by putting a yellow background
# behind the headline
proc markheadline {l id} {
- global canv mainfont linehtag commitinfo
+ global canv mainfont linehtag
+ drawcmitrow $l
set bbox [$canv bbox $linehtag($l)]
set t [$canv create rect $bbox -outline {} -tags matches -fill yellow]
$canv lower $t
proc selcanvline {w x y} {
global canv canvy0 ctext linespc
- global lineid linehtag linentag linedtag rowtextx
+ global rowtextx
set ymax [lindex [$canv cget -scrollregion] 3]
if {$ymax == {}} return
set yfrac [lindex [$canv yview] 0]
# append some text to the ctext widget, and make any SHA1 ID
# that we know about be a clickable link.
proc appendwithlinks {text} {
- global ctext idline linknum
+ global ctext commitrow linknum
set start [$ctext index "end - 1c"]
$ctext insert end $text
set s [lindex $l 0]
set e [lindex $l 1]
set linkid [string range $text $s $e]
- if {![info exists idline($linkid)]} continue
+ if {![info exists commitrow($linkid)]} continue
incr e
$ctext tag add link "$start + $s c" "$start + $e c"
$ctext tag add link$linknum "$start + $s c" "$start + $e c"
- $ctext tag bind link$linknum <1> [list selectline $idline($linkid) 1]
+ $ctext tag bind link$linknum <1> [list selectline $commitrow($linkid) 1]
incr linknum
}
$ctext tag conf link -foreground blue -underline 1
proc selectline {l isnew} {
global canv canv2 canv3 ctext commitinfo selectedline
- global lineid linehtag linentag linedtag
+ global displayorder linehtag linentag linedtag
global canvy0 linespc parents nparents children
global cflist currentid sha1entry
- global commentend idtags idline linknum
- global mergemax
+ global commentend idtags linknum
+ global mergemax numcommits
$canv delete hover
normalline
- if {![info exists lineid($l)] || ![info exists linehtag($l)]} return
- $canv delete secsel
- set t [eval $canv create rect [$canv bbox $linehtag($l)] -outline {{}} \
- -tags secsel -fill [$canv cget -selectbackground]]
- $canv lower $t
- $canv2 delete secsel
- set t [eval $canv2 create rect [$canv2 bbox $linentag($l)] -outline {{}} \
- -tags secsel -fill [$canv2 cget -selectbackground]]
- $canv2 lower $t
- $canv3 delete secsel
- set t [eval $canv3 create rect [$canv3 bbox $linedtag($l)] -outline {{}} \
- -tags secsel -fill [$canv3 cget -selectbackground]]
- $canv3 lower $t
+ if {$l < 0 || $l >= $numcommits} return
set y [expr {$canvy0 + $l * $linespc}]
set ymax [lindex [$canv cget -scrollregion] 3]
set ytop [expr {$y - $linespc - 1}]
set newtop 0
}
allcanvs yview moveto [expr {$newtop * 1.0 / $ymax}]
+ drawvisible
}
+ if {![info exists linehtag($l)]} return
+ $canv delete secsel
+ set t [eval $canv create rect [$canv bbox $linehtag($l)] -outline {{}} \
+ -tags secsel -fill [$canv cget -selectbackground]]
+ $canv lower $t
+ $canv2 delete secsel
+ set t [eval $canv2 create rect [$canv2 bbox $linentag($l)] -outline {{}} \
+ -tags secsel -fill [$canv2 cget -selectbackground]]
+ $canv2 lower $t
+ $canv3 delete secsel
+ set t [eval $canv3 create rect [$canv3 bbox $linedtag($l)] -outline {{}} \
+ -tags secsel -fill [$canv3 cget -selectbackground]]
+ $canv3 lower $t
+
if {$isnew} {
addtohistory [list selectline $l 0]
}
set selectedline $l
- set id $lineid($l)
+ set id [lindex $displayorder $l]
set currentid $id
$sha1entry delete 0 end
$sha1entry insert 0 $id
proc mergediff {id} {
global parents diffmergeid diffopts mdifffd
- global difffilestart
+ global difffilestart diffids
set diffmergeid $id
+ set diffids $id
catch {unset difffilestart}
# this doesn't seem to actually affect anything...
set env(GIT_DIFF_OPTS) $diffopts
proc getmergediffline {mdf id} {
global diffmergeid ctext cflist nextupdate nparents mergemax
- global difffilestart
+ global difffilestart mdifffd
set n [gets $mdf line]
if {$n < 0} {
}
return
}
- if {![info exists diffmergeid] || $id != $diffmergeid} {
+ if {![info exists diffmergeid] || $id != $diffmergeid
+ || $mdf != $mdifffd($id)} {
return
}
$ctext conf -state normal
set treediffs($ids) $treediff
unset treepending
if {$ids != $diffids} {
- gettreediffs $diffids
- } else {
- if {[info exists diffmergeid]} {
- contmergediff $ids
- } else {
- addtocflist $ids
+ if {![info exists diffmergeid]} {
+ gettreediffs $diffids
}
+ } else {
+ addtocflist $ids
}
return
}
set pad [string range "----------------------------------------" 1 $l]
$ctext insert end "$pad $header $pad\n" filesep
set diffinhdr 1
- } elseif {[regexp {^(---|\+\+\+)} $line]} {
+ } elseif {$diffinhdr && [string compare -length 3 $line "---"] == 0} {
+ # do nothing
+ } elseif {$diffinhdr && [string compare -length 3 $line "+++"] == 0} {
set diffinhdr 0
} elseif {[regexp {^@@ -([0-9]+),([0-9]+) \+([0-9]+),([0-9]+) @@(.*)} \
$line match f1l f1c f2l f2c rest]} {
set linespc [font metrics $mainfont -linespace]
set charspc [font measure $mainfont "m"]
- set canvy0 [expr {3 + 0.5 * $linespc}]
- set canvx0 [expr {3 + 0.5 * $linespc}]
+ set canvy0 [expr {int(3 + 0.5 * $linespc)}]
+ set canvx0 [expr {int(3 + 0.5 * $linespc)}]
set lthickness [expr {int($linespc / 9) + 1}]
set xspc1(0) $linespc
set xspc2 $linespc
}
proc redisplay {} {
- global stopped redisplaying phase
- if {$stopped > 1} return
- if {$phase == "getcommits"} return
- set redisplaying 1
- if {$phase == "drawgraph" || $phase == "incrdraw"} {
- set stopped 1
- } else {
- drawgraph
+ global canv canvy0 linespc numcommits
+ global selectedline
+
+ set ymax [lindex [$canv cget -scrollregion] 3]
+ if {$ymax eq {} || $ymax == 0} return
+ set span [$canv yview]
+ clear_display
+ allcanvs conf -scrollregion \
+ [list 0 0 0 [expr {$canvy0 + ($numcommits - 0.5) * $linespc + 2}]]
+ allcanvs yview moveto [lindex $span 0]
+ drawvisible
+ if {[info exists selectedline]} {
+ selectline $selectedline 0
}
}
}
proc gotocommit {} {
- global sha1string currentid idline tagids
- global lineid numcommits
+ global sha1string currentid commitrow tagids
+ global displayorder numcommits
if {$sha1string == {}
|| ([info exists currentid] && $sha1string == $currentid)} return
set id [string tolower $sha1string]
if {[regexp {^[0-9a-f]{4,39}$} $id]} {
set matches {}
- for {set l 0} {$l < $numcommits} {incr l} {
- if {[string match $id* $lineid($l)]} {
- lappend matches $lineid($l)
+ foreach i $displayorder {
+ if {[string match $id* $i]} {
+ lappend matches $i
}
}
if {$matches ne {}} {
}
}
}
- if {[info exists idline($id)]} {
- selectline $idline($id) 1
+ if {[info exists commitrow($id)]} {
+ selectline $commitrow($id) 1
return
}
if {[regexp {^[0-9a-fA-F]{4,}$} $sha1string]} {
global hoverx hovery hoverid hovertimer
global commitinfo canv
- if {![info exists commitinfo($id)]} return
+ if {![info exists commitinfo($id)] && ![getcommit $id]} return
set hoverx $x
set hovery $y
set hoverid $id
}
proc clickisonarrow {id y} {
- global mainline mainlinearrow sidelines lthickness
+ global lthickness idrowranges
set thresh [expr {2 * $lthickness + 6}]
- if {[info exists mainline($id)]} {
- if {$mainlinearrow($id) ne "none"} {
- if {abs([lindex $mainline($id) 1] - $y) < $thresh} {
- return "up"
- }
- }
- }
- if {[info exists sidelines($id)]} {
- foreach ls $sidelines($id) {
- set coords [lindex $ls 0]
- set arrow [lindex $ls 2]
- if {$arrow eq "first" || $arrow eq "both"} {
- if {abs([lindex $coords 1] - $y) < $thresh} {
- return "up"
- }
- }
- if {$arrow eq "last" || $arrow eq "both"} {
- if {abs([lindex $coords end] - $y) < $thresh} {
- return "down"
- }
- }
+ set n [expr {[llength $idrowranges($id)] - 1}]
+ for {set i 1} {$i < $n} {incr i} {
+ set row [lindex $idrowranges($id) $i]
+ if {abs([yc $row] - $y) < $thresh} {
+ return $i
}
}
return {}
}
-proc arrowjump {id dirn y} {
- global mainline sidelines canv canv2 canv3
+proc arrowjump {id n y} {
+ global idrowranges canv
- set yt {}
- if {$dirn eq "down"} {
- if {[info exists mainline($id)]} {
- set y1 [lindex $mainline($id) 1]
- if {$y1 > $y} {
- set yt $y1
- }
- }
- if {[info exists sidelines($id)]} {
- foreach ls $sidelines($id) {
- set y1 [lindex $ls 0 1]
- if {$y1 > $y && ($yt eq {} || $y1 < $yt)} {
- set yt $y1
- }
- }
- }
- } else {
- if {[info exists sidelines($id)]} {
- foreach ls $sidelines($id) {
- set y1 [lindex $ls 0 end]
- if {$y1 < $y && ($yt eq {} || $y1 > $yt)} {
- set yt $y1
- }
- }
- }
- }
- if {$yt eq {}} return
+ # 1 <-> 2, 3 <-> 4, etc...
+ set n [expr {(($n - 1) ^ 1) + 1}]
+ set row [lindex $idrowranges($id) $n]
+ set yt [yc $row]
set ymax [lindex [$canv cget -scrollregion] 3]
if {$ymax eq {} || $ymax <= 0} return
set view [$canv yview]
if {$yfrac < 0} {
set yfrac 0
}
- $canv yview moveto $yfrac
- $canv2 yview moveto $yfrac
- $canv3 yview moveto $yfrac
+ allcanvs yview moveto $yfrac
}
proc lineclick {x y id isnew} {
global ctext commitinfo children cflist canv thickerline
+ if {![info exists commitinfo($id)] && ![getcommit $id]} return
unmarkmatches
unselectline
normalline
$canv delete hover
# draw this line thicker than normal
- drawlines $id 1 1
set thickerline $id
+ drawlines $id
if {$isnew} {
set ymax [lindex [$canv cget -scrollregion] 3]
if {$ymax eq {}} return
set i 0
foreach child $children($id) {
incr i
+ if {![info exists commitinfo($child)] && ![getcommit $child]} continue
set info $commitinfo($child)
$ctext insert end "\n\t"
$ctext insert end $child [list link link$i]
proc normalline {} {
global thickerline
if {[info exists thickerline]} {
- drawlines $thickerline 0 1
+ set id $thickerline
unset thickerline
+ drawlines $id
}
}
proc selbyid {id} {
- global idline
- if {[info exists idline($id)]} {
- selectline $idline($id) 1
+ global commitrow
+ if {[info exists commitrow($id)]} {
+ selectline $commitrow($id) 1
}
}
}
proc rowmenu {x y id} {
- global rowctxmenu idline selectedline rowmenuid
+ global rowctxmenu commitrow selectedline rowmenuid
- if {![info exists selectedline] || $idline($id) eq $selectedline} {
+ if {![info exists selectedline] || $commitrow($id) eq $selectedline} {
set state disabled
} else {
set state normal
}
proc diffvssel {dirn} {
- global rowmenuid selectedline lineid
+ global rowmenuid selectedline displayorder
if {![info exists selectedline]} return
if {$dirn} {
- set oldid $lineid($selectedline)
+ set oldid [lindex $displayorder $selectedline]
set newid $rowmenuid
} else {
set oldid $rowmenuid
- set newid $lineid($selectedline)
+ set newid [lindex $displayorder $selectedline]
}
addtohistory [list doseldiff $oldid $newid]
doseldiff $oldid $newid
}
proc redrawtags {id} {
- global canv linehtag idline idpos selectedline
+ global canv linehtag commitrow idpos selectedline
- if {![info exists idline($id)]} return
+ if {![info exists commitrow($id)]} return
+ drawcmitrow $commitrow($id)
$canv delete tag.$id
set xt [eval drawtags $id $idpos($id)]
- $canv coords $linehtag($idline($id)) $xt [lindex $idpos($id) 2]
- if {[info exists selectedline] && $selectedline == $idline($id)} {
+ $canv coords $linehtag($commitrow($id)) $xt [lindex $idpos($id) 2]
+ if {[info exists selectedline] && $selectedline == $commitrow($id)} {
selectline $selectedline 0
}
}
set maxwidth 16
set revlistorder 0
set fastdate 0
+set uparrowlen 7
+set downarrowlen 7
+set mingaplen 30
set colors {green red blue magenta darkgrey brown orange}
switch -regexp -- $arg {
"^$" { }
"^-d" { set datemode 1 }
- "^-r" { set revlistorder 1 }
default {
lappend revtreeargs $arg
}
}
}
+# check that we can find a .git directory somewhere...
+set gitdir [gitdir]
+if {![file isdirectory $gitdir]} {
+ error_popup "Cannot find the git directory \"$gitdir\"."
+ exit 1
+}
+
set history {}
set historyindex 0
+set optim_delay 16
+
set stopped 0
-set redisplaying 0
set stuffsaved 0
set patchnum 0
setcoords
#define RANGE_HEADER_SIZE 30
static int got_alternates = -1;
+static int corrupt_object_found = 0;
static struct curl_slist *no_pragma_header;
alt_req->url);
active_requests++;
slot->in_use = 1;
+ if (slot->finished != NULL)
+ (*slot->finished) = 0;
if (!start_active_slot(slot)) {
got_alternates = -1;
slot->in_use = 0;
+ if (slot->finished != NULL)
+ (*slot->finished) = 1;
}
return;
}
obj_req->errorstr, obj_req->curl_result,
obj_req->http_code, hex);
} else if (obj_req->zret != Z_STREAM_END) {
+ corrupt_object_found++;
ret = error("File %s (%s) corrupt", hex, obj_req->url);
} else if (memcmp(obj_req->sha1, obj_req->real_sha1, 20)) {
ret = error("File %s has bad hash", hex);
http_cleanup();
+ if (corrupt_object_found) {
+ fprintf(stderr,
+"Some loose objects were found to be corrupt, but they might just be\n"
+"false '404 Not Found' error messages sent with an incorrect HTTP\n"
+"status code.  Suggest running git fsck-objects.\n");
+ }
return rc;
}
#include "http.h"
#include "refs.h"
#include "revision.h"
+#include "exec_cmd.h"
#include <expat.h>
static const char http_push_usage[] =
-"git-http-push [--complete] [--force] [--verbose] <url> <ref> [<ref>...]\n";
+"git-http-push [--all] [--force] [--verbose] <remote> [<head>...]\n";
#ifndef XML_STATUS_OK
enum XML_Status {
#define XML_STATUS_ERROR 0
#endif
+#define PREV_BUF_SIZE 4096
#define RANGE_HEADER_SIZE 30
/* DAV methods */
#define DAV_PROPFIND "PROPFIND"
#define DAV_PUT "PUT"
#define DAV_UNLOCK "UNLOCK"
+#define DAV_DELETE "DELETE"
/* DAV lock flags */
#define DAV_PROP_LOCKWR (1u << 0)
/* bits #0-4 in revision.h */
-#define LOCAL (1u << 5)
-#define REMOTE (1u << 6)
-#define PUSHING (1u << 7)
+#define LOCAL (1u << 5)
+#define REMOTE (1u << 6)
+#define FETCHING (1u << 7)
+#define PUSHING (1u << 8)
+
+/* We allow "recursive" symbolic refs. Only within reason, though */
+#define MAXDEPTH 5
static int pushing = 0;
static int aborted = 0;
-static char remote_dir_exists[256];
+static signed char remote_dir_exists[256];
static struct curl_slist *no_pragma_header;
static struct curl_slist *default_headers;
{
char *url;
int path_len;
+ int has_info_refs;
+ int can_update_info_refs;
+ int has_info_packs;
struct packed_git *packs;
+ struct remote_lock *locks;
};
static struct repo *remote = NULL;
-static struct remote_lock *remote_locks = NULL;
enum transfer_state {
+ NEED_FETCH,
+ RUN_FETCH_LOOSE,
+ RUN_FETCH_PACKED,
NEED_PUSH,
RUN_MKCOL,
RUN_PUT,
struct buffer buffer;
char filename[PATH_MAX];
char tmpfile[PATH_MAX];
+ int local_fileno;
+ FILE *local_stream;
enum transfer_state state;
CURLcode curl_result;
char errorstr[CURL_ERROR_SIZE];
z_stream stream;
int zret;
int rename;
+ void *userData;
struct active_request_slot *slot;
struct transfer_request *next;
};
char *token;
time_t start_time;
long timeout;
- int active;
int refreshing;
struct remote_lock *next;
};
-struct remote_dentry
+/* Flags that control remote_ls processing */
+#define PROCESS_FILES (1u << 0)
+#define PROCESS_DIRS (1u << 1)
+#define RECURSIVE (1u << 2)
+
+/* Flags that remote_ls passes to callback functions */
+#define IS_DIR (1u << 0)
+
+struct remote_ls_ctx
{
- char *base;
- char *name;
- int is_dir;
+ char *path;
+ void (*userFunc)(struct remote_ls_ctx *ls);
+ void *userData;
+ int flags;
+ char *dentry_name;
+ int dentry_flags;
+ struct remote_ls_ctx *parent;
};
static void finish_request(struct transfer_request *request);
+static void release_request(struct transfer_request *request);
static void process_response(void *callback_data)
{
finish_request(request);
}
+static size_t fwrite_sha1_file(void *ptr, size_t eltsize, size_t nmemb,
+ void *data)
+{
+ unsigned char expn[4096];
+ size_t size = eltsize * nmemb;
+ int posn = 0;
+ struct transfer_request *request = (struct transfer_request *)data;
+ do {
+ ssize_t retval = write(request->local_fileno,
+ ptr + posn, size - posn);
+ if (retval < 0)
+ return posn;
+ posn += retval;
+ } while (posn < size);
+
+ request->stream.avail_in = size;
+ request->stream.next_in = ptr;
+ do {
+ request->stream.next_out = expn;
+ request->stream.avail_out = sizeof(expn);
+ request->zret = inflate(&request->stream, Z_SYNC_FLUSH);
+ SHA1_Update(&request->c, expn,
+ sizeof(expn) - request->stream.avail_out);
+ } while (request->stream.avail_in && request->zret == Z_OK);
+ data_received++;
+ return size;
+}
+
+static void start_fetch_loose(struct transfer_request *request)
+{
+ char *hex = sha1_to_hex(request->obj->sha1);
+ char *filename;
+ char prevfile[PATH_MAX];
+ char *url;
+ char *posn;
+ int prevlocal;
+ unsigned char prev_buf[PREV_BUF_SIZE];
+ ssize_t prev_read = 0;
+ long prev_posn = 0;
+ char range[RANGE_HEADER_SIZE];
+ struct curl_slist *range_header = NULL;
+ struct active_request_slot *slot;
+
+ filename = sha1_file_name(request->obj->sha1);
+ snprintf(request->filename, sizeof(request->filename), "%s", filename);
+ snprintf(request->tmpfile, sizeof(request->tmpfile),
+ "%s.temp", filename);
+
+ snprintf(prevfile, sizeof(prevfile), "%s.prev", request->filename);
+ unlink(prevfile);
+ rename(request->tmpfile, prevfile);
+ unlink(request->tmpfile);
+
+ if (request->local_fileno != -1)
+ error("fd leakage in start: %d", request->local_fileno);
+ request->local_fileno = open(request->tmpfile,
+ O_WRONLY | O_CREAT | O_EXCL, 0666);
+ /* This could have failed due to the "lazy directory creation";
+ * try to mkdir the last path component.
+ */
+ if (request->local_fileno < 0 && errno == ENOENT) {
+ char *dir = strrchr(request->tmpfile, '/');
+ if (dir) {
+ *dir = 0;
+ mkdir(request->tmpfile, 0777);
+ *dir = '/';
+ }
+ request->local_fileno = open(request->tmpfile,
+ O_WRONLY | O_CREAT | O_EXCL, 0666);
+ }
+
+ if (request->local_fileno < 0) {
+ request->state = ABORTED;
+ error("Couldn't create temporary file %s for %s: %s",
+ request->tmpfile, request->filename, strerror(errno));
+ return;
+ }
+
+ memset(&request->stream, 0, sizeof(request->stream));
+
+ inflateInit(&request->stream);
+
+ SHA1_Init(&request->c);
+
+ url = xmalloc(strlen(remote->url) + 50);
+ request->url = xmalloc(strlen(remote->url) + 50);
+ strcpy(url, remote->url);
+ posn = url + strlen(remote->url);
+ strcpy(posn, "objects/");
+ posn += 8;
+ memcpy(posn, hex, 2);
+ posn += 2;
+ *(posn++) = '/';
+ strcpy(posn, hex + 2);
+ strcpy(request->url, url);
+
+ /* If a previous temp file is present, process what was already
+ fetched. */
+ prevlocal = open(prevfile, O_RDONLY);
+ if (prevlocal != -1) {
+ do {
+ prev_read = read(prevlocal, prev_buf, PREV_BUF_SIZE);
+ if (prev_read>0) {
+ if (fwrite_sha1_file(prev_buf,
+ 1,
+ prev_read,
+ request) == prev_read) {
+ prev_posn += prev_read;
+ } else {
+ prev_read = -1;
+ }
+ }
+ } while (prev_read > 0);
+ close(prevlocal);
+ }
+ unlink(prevfile);
+
+ /* Reset inflate/SHA1 if there was an error reading the previous temp
+ file; also rewind to the beginning of the local file. */
+ if (prev_read == -1) {
+ memset(&request->stream, 0, sizeof(request->stream));
+ inflateInit(&request->stream);
+ SHA1_Init(&request->c);
+ if (prev_posn>0) {
+ prev_posn = 0;
+			lseek(request->local_fileno, 0, SEEK_SET);
+ ftruncate(request->local_fileno, 0);
+ }
+ }
+
+ slot = get_active_slot();
+ slot->callback_func = process_response;
+ slot->callback_data = request;
+ request->slot = slot;
+
+ curl_easy_setopt(slot->curl, CURLOPT_FILE, request);
+ curl_easy_setopt(slot->curl, CURLOPT_WRITEFUNCTION, fwrite_sha1_file);
+ curl_easy_setopt(slot->curl, CURLOPT_ERRORBUFFER, request->errorstr);
+ curl_easy_setopt(slot->curl, CURLOPT_URL, url);
+ curl_easy_setopt(slot->curl, CURLOPT_HTTPHEADER, no_pragma_header);
+
+ /* If we have successfully processed data from a previous fetch
+ attempt, only fetch the data we don't already have. */
+ if (prev_posn>0) {
+ if (push_verbosely)
+ fprintf(stderr,
+ "Resuming fetch of object %s at byte %ld\n",
+ hex, prev_posn);
+ sprintf(range, "Range: bytes=%ld-", prev_posn);
+ range_header = curl_slist_append(range_header, range);
+ curl_easy_setopt(slot->curl,
+ CURLOPT_HTTPHEADER, range_header);
+ }
+
+ /* Try to get the request started, abort the request on error */
+ request->state = RUN_FETCH_LOOSE;
+ if (!start_active_slot(slot)) {
+ fprintf(stderr, "Unable to start GET request\n");
+ remote->can_update_info_refs = 0;
+ release_request(request);
+ }
+}
+
+static void start_fetch_packed(struct transfer_request *request)
+{
+ char *url;
+ struct packed_git *target;
+ FILE *packfile;
+ char *filename;
+ long prev_posn = 0;
+ char range[RANGE_HEADER_SIZE];
+ struct curl_slist *range_header = NULL;
+
+ struct transfer_request *check_request = request_queue_head;
+ struct active_request_slot *slot;
+
+ target = find_sha1_pack(request->obj->sha1, remote->packs);
+ if (!target) {
+ fprintf(stderr, "Unable to fetch %s, will not be able to update server info refs\n", sha1_to_hex(request->obj->sha1));
+ remote->can_update_info_refs = 0;
+ release_request(request);
+ return;
+ }
+
+ fprintf(stderr, "Fetching pack %s\n", sha1_to_hex(target->sha1));
+ fprintf(stderr, " which contains %s\n", sha1_to_hex(request->obj->sha1));
+
+ filename = sha1_pack_name(target->sha1);
+ snprintf(request->filename, sizeof(request->filename), "%s", filename);
+ snprintf(request->tmpfile, sizeof(request->tmpfile),
+ "%s.temp", filename);
+
+ url = xmalloc(strlen(remote->url) + 64);
+ sprintf(url, "%sobjects/pack/pack-%s.pack",
+ remote->url, sha1_to_hex(target->sha1));
+
+ /* Make sure there isn't another open request for this pack */
+ while (check_request) {
+ if (check_request->state == RUN_FETCH_PACKED &&
+ !strcmp(check_request->url, url)) {
+ free(url);
+ release_request(request);
+ return;
+ }
+ check_request = check_request->next;
+ }
+
+ packfile = fopen(request->tmpfile, "a");
+ if (!packfile) {
+		fprintf(stderr, "Unable to open local file %s for pack\n",
+ filename);
+ remote->can_update_info_refs = 0;
+ free(url);
+ return;
+ }
+
+ slot = get_active_slot();
+ slot->callback_func = process_response;
+ slot->callback_data = request;
+ request->slot = slot;
+ request->local_stream = packfile;
+ request->userData = target;
+
+ request->url = url;
+ curl_easy_setopt(slot->curl, CURLOPT_FILE, packfile);
+ curl_easy_setopt(slot->curl, CURLOPT_WRITEFUNCTION, fwrite);
+ curl_easy_setopt(slot->curl, CURLOPT_URL, url);
+ curl_easy_setopt(slot->curl, CURLOPT_HTTPHEADER, no_pragma_header);
+ slot->local = packfile;
+
+ /* If there is data present from a previous transfer attempt,
+ resume where it left off */
+ prev_posn = ftell(packfile);
+ if (prev_posn>0) {
+ if (push_verbosely)
+ fprintf(stderr,
+ "Resuming fetch of pack %s at byte %ld\n",
+ sha1_to_hex(target->sha1), prev_posn);
+ sprintf(range, "Range: bytes=%ld-", prev_posn);
+ range_header = curl_slist_append(range_header, range);
+ curl_easy_setopt(slot->curl, CURLOPT_HTTPHEADER, range_header);
+ }
+
+ /* Try to get the request started, abort the request on error */
+ request->state = RUN_FETCH_PACKED;
+ if (!start_active_slot(slot)) {
+ fprintf(stderr, "Unable to start GET request\n");
+ remote->can_update_info_refs = 0;
+ release_request(request);
+ }
+}
+
static void start_mkcol(struct transfer_request *request)
{
char *hex = sha1_to_hex(request->obj->sha1);
}
}
-static int refresh_lock(struct remote_lock *check_lock)
+static int refresh_lock(struct remote_lock *lock)
{
struct active_request_slot *slot;
+ struct slot_results results;
char *if_header;
char timeout_header[25];
struct curl_slist *dav_headers = NULL;
- struct remote_lock *lock;
- int time_remaining;
- time_t current_time;
+ int rc = 0;
- /* Refresh all active locks if they're close to expiring */
- for (lock = remote_locks; lock; lock = lock->next) {
- if (!lock->active)
- continue;
+ lock->refreshing = 1;
- current_time = time(NULL);
- time_remaining = lock->start_time + lock->timeout
- - current_time;
- if (time_remaining > LOCK_REFRESH)
- continue;
+ if_header = xmalloc(strlen(lock->token) + 25);
+ sprintf(if_header, "If: (<opaquelocktoken:%s>)", lock->token);
+ sprintf(timeout_header, "Timeout: Second-%ld", lock->timeout);
+ dav_headers = curl_slist_append(dav_headers, if_header);
+ dav_headers = curl_slist_append(dav_headers, timeout_header);
+
+ slot = get_active_slot();
+ slot->results = &results;
+ curl_easy_setopt(slot->curl, CURLOPT_HTTPGET, 1);
+ curl_easy_setopt(slot->curl, CURLOPT_WRITEFUNCTION, fwrite_null);
+ curl_easy_setopt(slot->curl, CURLOPT_URL, lock->url);
+ curl_easy_setopt(slot->curl, CURLOPT_CUSTOMREQUEST, DAV_LOCK);
+ curl_easy_setopt(slot->curl, CURLOPT_HTTPHEADER, dav_headers);
- lock->refreshing = 1;
+ if (start_active_slot(slot)) {
+ run_active_slot(slot);
+ if (results.curl_result != CURLE_OK) {
+ fprintf(stderr, "LOCK HTTP error %ld\n",
+ results.http_code);
+ } else {
+ lock->start_time = time(NULL);
+ rc = 1;
+ }
+ }
- if_header = xmalloc(strlen(lock->token) + 25);
- sprintf(if_header, "If: (<opaquelocktoken:%s>)", lock->token);
- sprintf(timeout_header, "Timeout: Second-%ld", lock->timeout);
- dav_headers = curl_slist_append(dav_headers, if_header);
- dav_headers = curl_slist_append(dav_headers, timeout_header);
+ lock->refreshing = 0;
+ curl_slist_free_all(dav_headers);
+ free(if_header);
- slot = get_active_slot();
- curl_easy_setopt(slot->curl, CURLOPT_HTTPGET, 1);
- curl_easy_setopt(slot->curl, CURLOPT_WRITEFUNCTION, fwrite_null);
- curl_easy_setopt(slot->curl, CURLOPT_URL, lock->url);
- curl_easy_setopt(slot->curl, CURLOPT_CUSTOMREQUEST, DAV_LOCK);
- curl_easy_setopt(slot->curl, CURLOPT_HTTPHEADER, dav_headers);
+ return rc;
+}
- if (start_active_slot(slot)) {
- run_active_slot(slot);
- if (slot->curl_result != CURLE_OK) {
- fprintf(stderr, "Got HTTP error %ld\n", slot->http_code);
- lock->active = 0;
- } else {
- lock->active = 1;
- lock->start_time = time(NULL);
+static void check_locks(void)
+{
+ struct remote_lock *lock = remote->locks;
+ time_t current_time = time(NULL);
+ int time_remaining;
+
+ while (lock) {
+ time_remaining = lock->start_time + lock->timeout -
+ current_time;
+ if (!lock->refreshing && time_remaining < LOCK_REFRESH) {
+ if (!refresh_lock(lock)) {
+ fprintf(stderr,
+ "Unable to refresh lock for %s\n",
+ lock->url);
+ aborted = 1;
+ return;
}
}
-
- lock->refreshing = 0;
- curl_slist_free_all(dav_headers);
- free(if_header);
+ lock = lock->next;
}
-
- if (check_lock)
- return check_lock->active;
- else
- return 0;
}
static void release_request(struct transfer_request *request)
entry->next = entry->next->next;
}
+ if (request->local_fileno != -1)
+ close(request->local_fileno);
+ if (request->local_stream)
+ fclose(request->local_stream);
if (request->url != NULL)
free(request->url);
free(request);
static void finish_request(struct transfer_request *request)
{
- request->curl_result = request->slot->curl_result;
+ struct stat st;
+ struct packed_git *target;
+ struct packed_git **lst;
+
+ request->curl_result = request->slot->curl_result;
request->http_code = request->slot->http_code;
request->slot = NULL;
/* Keep locks active */
- refresh_lock(request->lock);
+ check_locks();
if (request->headers != NULL)
curl_slist_free_all(request->headers);
}
} else if (request->state == RUN_MOVE) {
if (request->curl_result == CURLE_OK) {
- fprintf(stderr, " sent %s\n",
- sha1_to_hex(request->obj->sha1));
- request->state = COMPLETE;
+ if (push_verbosely)
+ fprintf(stderr, " sent %s\n",
+ sha1_to_hex(request->obj->sha1));
request->obj->flags |= REMOTE;
release_request(request);
} else {
request->state = ABORTED;
aborted = 1;
}
+ } else if (request->state == RUN_FETCH_LOOSE) {
+ fchmod(request->local_fileno, 0444);
+ close(request->local_fileno); request->local_fileno = -1;
+
+ if (request->curl_result != CURLE_OK &&
+ request->http_code != 416) {
+ if (stat(request->tmpfile, &st) == 0) {
+ if (st.st_size == 0)
+ unlink(request->tmpfile);
+ }
+ } else {
+ if (request->http_code == 416)
+ fprintf(stderr, "Warning: requested range invalid; we may already have all the data.\n");
+
+ inflateEnd(&request->stream);
+ SHA1_Final(request->real_sha1, &request->c);
+ if (request->zret != Z_STREAM_END) {
+ unlink(request->tmpfile);
+ } else if (memcmp(request->obj->sha1, request->real_sha1, 20)) {
+ unlink(request->tmpfile);
+ } else {
+ request->rename =
+ move_temp_to_file(
+ request->tmpfile,
+ request->filename);
+ if (request->rename == 0) {
+ request->obj->flags |= (LOCAL | REMOTE);
+ }
+ }
+ }
+
+ /* Try fetching packed if necessary */
+ if (request->obj->flags & LOCAL)
+ release_request(request);
+ else
+ start_fetch_packed(request);
+
+ } else if (request->state == RUN_FETCH_PACKED) {
+ if (request->curl_result != CURLE_OK) {
+ fprintf(stderr, "Unable to get pack file %s\n%s",
+ request->url, curl_errorstr);
+ remote->can_update_info_refs = 0;
+ } else {
+ fclose(request->local_stream);
+ request->local_stream = NULL;
+ if (!move_temp_to_file(request->tmpfile,
+ request->filename)) {
+ target = (struct packed_git *)request->userData;
+ lst = &remote->packs;
+ while (*lst != target)
+ lst = &((*lst)->next);
+ *lst = (*lst)->next;
+
+ if (!verify_pack(target, 0))
+ install_packed_git(target);
+ else
+ remote->can_update_info_refs = 0;
+ }
+ }
+ release_request(request);
}
}
void fill_active_slots(void)
{
struct transfer_request *request = request_queue_head;
+ struct transfer_request *next;
struct active_request_slot *slot = active_queue_head;
int num_transfers;
return;
while (active_requests < max_requests && request != NULL) {
- if (pushing && request->state == NEED_PUSH) {
+ next = request->next;
+ if (request->state == NEED_FETCH) {
+ start_fetch_loose(request);
+ } else if (pushing && request->state == NEED_PUSH) {
if (remote_dir_exists[request->obj->sha1[0]] == 1) {
start_put(request);
} else {
}
curl_multi_perform(curlm, &num_transfers);
}
- request = request->next;
+ request = next;
}
while (slot != NULL) {
static void get_remote_object_list(unsigned char parent);
-static void add_request(struct object *obj, struct remote_lock *lock)
+static void add_fetch_request(struct object *obj)
+{
+ struct transfer_request *request;
+
+ check_locks();
+
+ /*
+ * Don't fetch the object if it's known to exist locally
+ * or is already in the request queue
+ */
+ if (remote_dir_exists[obj->sha1[0]] == -1)
+ get_remote_object_list(obj->sha1[0]);
+ if (obj->flags & (LOCAL | FETCHING))
+ return;
+
+ obj->flags |= FETCHING;
+ request = xmalloc(sizeof(*request));
+ request->obj = obj;
+ request->url = NULL;
+ request->lock = NULL;
+ request->headers = NULL;
+ request->local_fileno = -1;
+ request->local_stream = NULL;
+ request->state = NEED_FETCH;
+ request->next = request_queue_head;
+ request_queue_head = request;
+
+ fill_active_slots();
+ step_active_slots();
+}
+
+static int add_send_request(struct object *obj, struct remote_lock *lock)
{
struct transfer_request *request = request_queue_head;
struct packed_git *target;
+ /* Keep locks active */
+ check_locks();
+
/*
* Don't push the object if it's known to exist on the remote
* or is already in the request queue
if (remote_dir_exists[obj->sha1[0]] == -1)
get_remote_object_list(obj->sha1[0]);
if (obj->flags & (REMOTE | PUSHING))
- return;
+ return 0;
target = find_sha1_pack(obj->sha1, remote->packs);
if (target) {
obj->flags |= REMOTE;
- return;
+ return 0;
}
obj->flags |= PUSHING;
request->url = NULL;
request->lock = lock;
request->headers = NULL;
+ request->local_fileno = -1;
+ request->local_stream = NULL;
request->state = NEED_PUSH;
request->next = request_queue_head;
request_queue_head = request;
fill_active_slots();
step_active_slots();
+
+ return 1;
}
static int fetch_index(unsigned char *sha1)
FILE *indexfile;
struct active_request_slot *slot;
+ struct slot_results results;
/* Don't use the index if the pack isn't there */
- url = xmalloc(strlen(remote->url) + 65);
- sprintf(url, "%s/objects/pack/pack-%s.pack", remote->url, hex);
+ url = xmalloc(strlen(remote->url) + 64);
+ sprintf(url, "%sobjects/pack/pack-%s.pack", remote->url, hex);
slot = get_active_slot();
+ slot->results = &results;
curl_easy_setopt(slot->curl, CURLOPT_URL, url);
curl_easy_setopt(slot->curl, CURLOPT_NOBODY, 1);
if (start_active_slot(slot)) {
run_active_slot(slot);
- if (slot->curl_result != CURLE_OK) {
+ if (results.curl_result != CURLE_OK) {
free(url);
return error("Unable to verify pack %s is available",
hex);
if (push_verbosely)
fprintf(stderr, "Getting index for pack %s\n", hex);
-
- sprintf(url, "%s/objects/pack/pack-%s.idx", remote->url, hex);
-
+
+ sprintf(url, "%sobjects/pack/pack-%s.idx", remote->url, hex);
+
filename = sha1_pack_index_name(sha1);
snprintf(tmpfile, sizeof(tmpfile), "%s.temp", filename);
indexfile = fopen(tmpfile, "a");
filename);
slot = get_active_slot();
+ slot->results = &results;
curl_easy_setopt(slot->curl, CURLOPT_NOBODY, 0);
curl_easy_setopt(slot->curl, CURLOPT_HTTPGET, 1);
curl_easy_setopt(slot->curl, CURLOPT_FILE, indexfile);
if (start_active_slot(slot)) {
run_active_slot(slot);
- if (slot->curl_result != CURLE_OK) {
+ if (results.curl_result != CURLE_OK) {
free(url);
fclose(indexfile);
return error("Unable to get pack index %s\n%s", url,
int i = 0;
struct active_request_slot *slot;
+ struct slot_results results;
data = xmalloc(4096);
memset(data, 0, 4096);
if (push_verbosely)
fprintf(stderr, "Getting pack list\n");
-
- url = xmalloc(strlen(remote->url) + 21);
- sprintf(url, "%s/objects/info/packs", remote->url);
+
+ url = xmalloc(strlen(remote->url) + 20);
+ sprintf(url, "%sobjects/info/packs", remote->url);
slot = get_active_slot();
+ slot->results = &results;
curl_easy_setopt(slot->curl, CURLOPT_FILE, &buffer);
curl_easy_setopt(slot->curl, CURLOPT_WRITEFUNCTION, fwrite_buffer);
curl_easy_setopt(slot->curl, CURLOPT_URL, url);
curl_easy_setopt(slot->curl, CURLOPT_HTTPHEADER, NULL);
if (start_active_slot(slot)) {
run_active_slot(slot);
- if (slot->curl_result != CURLE_OK) {
+ if (results.curl_result != CURLE_OK) {
free(buffer.buffer);
free(url);
- if (slot->http_code == 404)
+ if (results.http_code == 404)
return 0;
else
return error("%s", curl_errorstr);
struct buffer buffer;
char *base = remote->url;
struct active_request_slot *slot;
+ struct slot_results results;
buffer.size = 41;
buffer.posn = 0;
buffer.buffer = hex;
hex[41] = '\0';
-
+
url = quote_ref_url(base, ref);
slot = get_active_slot();
+ slot->results = &results;
curl_easy_setopt(slot->curl, CURLOPT_FILE, &buffer);
curl_easy_setopt(slot->curl, CURLOPT_WRITEFUNCTION, fwrite_buffer);
curl_easy_setopt(slot->curl, CURLOPT_HTTPHEADER, NULL);
curl_easy_setopt(slot->curl, CURLOPT_URL, url);
if (start_active_slot(slot)) {
run_active_slot(slot);
- if (slot->curl_result != CURLE_OK)
+ if (results.curl_result != CURLE_OK)
return error("Couldn't get %s for %s\n%s",
url, ref, curl_errorstr);
} else {
}
static void one_remote_ref(char *refname);
-static void crawl_remote_refs(char *path);
-
-static void handle_crawl_ref_ctx(struct xml_ctx *ctx, int tag_closed)
-{
- struct remote_dentry *dentry = (struct remote_dentry *)ctx->userData;
-
-
- if (tag_closed) {
- if (!strcmp(ctx->name, DAV_PROPFIND_RESP) && dentry->name) {
- if (dentry->is_dir) {
- if (strcmp(dentry->name, dentry->base)) {
- crawl_remote_refs(dentry->name);
- }
- } else {
- one_remote_ref(dentry->name);
- }
- } else if (!strcmp(ctx->name, DAV_PROPFIND_NAME) && ctx->cdata) {
- dentry->name = xmalloc(strlen(ctx->cdata) -
- remote->path_len + 1);
- strcpy(dentry->name,
- ctx->cdata + remote->path_len);
- } else if (!strcmp(ctx->name, DAV_PROPFIND_COLLECTION)) {
- dentry->is_dir = 1;
- }
- } else if (!strcmp(ctx->name, DAV_PROPFIND_RESP)) {
- dentry->name = NULL;
- dentry->is_dir = 0;
- }
-}
-
-static void handle_remote_object_list_ctx(struct xml_ctx *ctx, int tag_closed)
-{
- char *path;
- char *obj_hex;
-
- if (tag_closed) {
- if (!strcmp(ctx->name, DAV_PROPFIND_NAME) && ctx->cdata) {
- path = ctx->cdata + remote->path_len;
- if (strlen(path) != 50)
- return;
- path += 9;
- obj_hex = xmalloc(strlen(path));
- strncpy(obj_hex, path, 2);
- strcpy(obj_hex + 2, path + 3);
- one_remote_object(obj_hex);
- free(obj_hex);
- }
- }
-}
static void
xml_start_tag(void *userData, const char *name, const char **atts)
static struct remote_lock *lock_remote(char *path, long timeout)
{
struct active_request_slot *slot;
+ struct slot_results results;
struct buffer out_buffer;
struct buffer in_buffer;
char *out_data;
char *url;
char *ep;
char timeout_header[25];
- struct remote_lock *lock = remote_locks;
+ struct remote_lock *lock = NULL;
XML_Parser parser = XML_ParserCreate(NULL);
enum XML_Status result;
struct curl_slist *dav_headers = NULL;
url = xmalloc(strlen(remote->url) + strlen(path) + 1);
sprintf(url, "%s%s", remote->url, path);
- /* Make sure the url is not already locked */
- while (lock && strcmp(lock->url, url)) {
- lock = lock->next;
- }
- if (lock) {
- free(url);
- if (refresh_lock(lock))
- return lock;
- else
- return NULL;
- }
-
/* Make sure leading directories exist for the remote ref */
ep = strchr(url + strlen(remote->url) + 11, '/');
while (ep) {
*ep = 0;
slot = get_active_slot();
+ slot->results = &results;
curl_easy_setopt(slot->curl, CURLOPT_HTTPGET, 1);
curl_easy_setopt(slot->curl, CURLOPT_URL, url);
curl_easy_setopt(slot->curl, CURLOPT_CUSTOMREQUEST, DAV_MKCOL);
curl_easy_setopt(slot->curl, CURLOPT_WRITEFUNCTION, fwrite_null);
if (start_active_slot(slot)) {
run_active_slot(slot);
- if (slot->curl_result != CURLE_OK &&
- slot->http_code != 405) {
+ if (results.curl_result != CURLE_OK &&
+ results.http_code != 405) {
fprintf(stderr,
"Unable to create branch path %s\n",
url);
return NULL;
}
} else {
- fprintf(stderr, "Unable to start request\n");
+ fprintf(stderr, "Unable to start MKCOL request\n");
free(url);
return NULL;
}
dav_headers = curl_slist_append(dav_headers, "Content-Type: text/xml");
slot = get_active_slot();
+ slot->results = &results;
curl_easy_setopt(slot->curl, CURLOPT_INFILE, &out_buffer);
curl_easy_setopt(slot->curl, CURLOPT_INFILESIZE, out_buffer.size);
curl_easy_setopt(slot->curl, CURLOPT_READFUNCTION, fread_buffer);
curl_easy_setopt(slot->curl, CURLOPT_HTTPHEADER, dav_headers);
lock = xcalloc(1, sizeof(*lock));
- lock->owner = NULL;
- lock->token = NULL;
lock->timeout = -1;
- lock->refreshing = 0;
if (start_active_slot(slot)) {
run_active_slot(slot);
- if (slot->curl_result == CURLE_OK) {
+ if (results.curl_result == CURLE_OK) {
ctx.name = xcalloc(10, 1);
ctx.len = 0;
ctx.cdata = NULL;
}
}
} else {
- fprintf(stderr, "Unable to start request\n");
+ fprintf(stderr, "Unable to start LOCK request\n");
}
curl_slist_free_all(dav_headers);
lock = NULL;
} else {
lock->url = url;
- lock->active = 1;
lock->start_time = time(NULL);
- lock->next = remote_locks;
- remote_locks = lock;
+ lock->next = remote->locks;
+ remote->locks = lock;
}
return lock;
static int unlock_remote(struct remote_lock *lock)
{
struct active_request_slot *slot;
+ struct slot_results results;
+ struct remote_lock *prev = remote->locks;
char *lock_token_header;
struct curl_slist *dav_headers = NULL;
int rc = 0;
dav_headers = curl_slist_append(dav_headers, lock_token_header);
slot = get_active_slot();
+ slot->results = &results;
curl_easy_setopt(slot->curl, CURLOPT_WRITEFUNCTION, fwrite_null);
curl_easy_setopt(slot->curl, CURLOPT_URL, lock->url);
curl_easy_setopt(slot->curl, CURLOPT_CUSTOMREQUEST, DAV_UNLOCK);
if (start_active_slot(slot)) {
run_active_slot(slot);
- if (slot->curl_result == CURLE_OK)
+ if (results.curl_result == CURLE_OK)
rc = 1;
else
- fprintf(stderr, "Got HTTP error %ld\n",
- slot->http_code);
+ fprintf(stderr, "UNLOCK HTTP error %ld\n",
+ results.http_code);
} else {
- fprintf(stderr, "Unable to start request\n");
+ fprintf(stderr, "Unable to start UNLOCK request\n");
}
curl_slist_free_all(dav_headers);
free(lock_token_header);
- lock->active = 0;
+ if (remote->locks == lock) {
+ remote->locks = lock->next;
+ } else {
+ while (prev && prev->next != lock)
+ prev = prev->next;
+ if (prev)
+ prev->next = prev->next->next;
+ }
+
+ if (lock->owner != NULL)
+ free(lock->owner);
+ free(lock->url);
+ free(lock->token);
+ free(lock);
return rc;
}
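The new unlink logic in `unlock_remote` handles two cases: the lock is at the head of `remote->locks`, or it is interior and a `prev` walk is needed. The same pattern in isolation, with a hypothetical node type:

```c
#include <assert.h>
#include <stddef.h>

struct node {
	int key;
	struct node *next;
};

/* Unlink *victim* from the list headed at *head, mirroring the
 * head-vs-interior cases handled in unlock_remote(). */
static void unlink_node(struct node **head, struct node *victim)
{
	struct node *prev = *head;

	if (*head == victim) {
		*head = victim->next;
		return;
	}
	while (prev && prev->next != victim)
		prev = prev->next;
	if (prev)
		prev->next = victim->next;
}
```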
-static void crawl_remote_refs(char *path)
-{
- char *url;
- struct active_request_slot *slot;
- struct buffer in_buffer;
- struct buffer out_buffer;
- char *in_data;
- char *out_data;
- XML_Parser parser = XML_ParserCreate(NULL);
- enum XML_Status result;
- struct curl_slist *dav_headers = NULL;
- struct xml_ctx ctx;
- struct remote_dentry dentry;
-
- fprintf(stderr, " %s\n", path);
-
- dentry.base = path;
- dentry.name = NULL;
- dentry.is_dir = 0;
+static void remote_ls(const char *path, int flags,
+ void (*userFunc)(struct remote_ls_ctx *ls),
+ void *userData);
- url = xmalloc(strlen(remote->url) + strlen(path) + 1);
- sprintf(url, "%s%s", remote->url, path);
+static void process_ls_object(struct remote_ls_ctx *ls)
+{
+ unsigned int *parent = (unsigned int *)ls->userData;
+ char *path = ls->dentry_name;
+ char *obj_hex;
- out_buffer.size = strlen(PROPFIND_ALL_REQUEST);
- out_data = xmalloc(out_buffer.size + 1);
- snprintf(out_data, out_buffer.size + 1, PROPFIND_ALL_REQUEST);
- out_buffer.posn = 0;
- out_buffer.buffer = out_data;
+ if (!strcmp(ls->path, ls->dentry_name) && (ls->flags & IS_DIR)) {
+ remote_dir_exists[*parent] = 1;
+ return;
+ }
- in_buffer.size = 4096;
- in_data = xmalloc(in_buffer.size);
- in_buffer.posn = 0;
- in_buffer.buffer = in_data;
+ if (strlen(path) != 49)
+ return;
+ path += 8;
+ obj_hex = xmalloc(strlen(path));
+ strncpy(obj_hex, path, 2);
+ strcpy(obj_hex + 2, path + 3);
+ one_remote_object(obj_hex);
+ free(obj_hex);
+}
- dav_headers = curl_slist_append(dav_headers, "Depth: 1");
- dav_headers = curl_slist_append(dav_headers, "Content-Type: text/xml");
+static void process_ls_ref(struct remote_ls_ctx *ls)
+{
+ if (!strcmp(ls->path, ls->dentry_name) && (ls->dentry_flags & IS_DIR)) {
+ fprintf(stderr, " %s\n", ls->dentry_name);
+ return;
+ }
- slot = get_active_slot();
- curl_easy_setopt(slot->curl, CURLOPT_INFILE, &out_buffer);
- curl_easy_setopt(slot->curl, CURLOPT_INFILESIZE, out_buffer.size);
- curl_easy_setopt(slot->curl, CURLOPT_READFUNCTION, fread_buffer);
- curl_easy_setopt(slot->curl, CURLOPT_FILE, &in_buffer);
- curl_easy_setopt(slot->curl, CURLOPT_WRITEFUNCTION, fwrite_buffer);
- curl_easy_setopt(slot->curl, CURLOPT_URL, url);
- curl_easy_setopt(slot->curl, CURLOPT_UPLOAD, 1);
- curl_easy_setopt(slot->curl, CURLOPT_CUSTOMREQUEST, DAV_PROPFIND);
- curl_easy_setopt(slot->curl, CURLOPT_HTTPHEADER, dav_headers);
+ if (!(ls->dentry_flags & IS_DIR))
+ one_remote_ref(ls->dentry_name);
+}
- if (start_active_slot(slot)) {
- run_active_slot(slot);
- if (slot->curl_result == CURLE_OK) {
- ctx.name = xcalloc(10, 1);
- ctx.len = 0;
- ctx.cdata = NULL;
- ctx.userFunc = handle_crawl_ref_ctx;
- ctx.userData = &dentry;
- XML_SetUserData(parser, &ctx);
- XML_SetElementHandler(parser, xml_start_tag,
- xml_end_tag);
- XML_SetCharacterDataHandler(parser, xml_cdata);
- result = XML_Parse(parser, in_buffer.buffer,
- in_buffer.posn, 1);
- free(ctx.name);
+static void handle_remote_ls_ctx(struct xml_ctx *ctx, int tag_closed)
+{
+ struct remote_ls_ctx *ls = (struct remote_ls_ctx *)ctx->userData;
- if (result != XML_STATUS_OK) {
- fprintf(stderr, "XML error: %s\n",
- XML_ErrorString(
- XML_GetErrorCode(parser)));
+ if (tag_closed) {
+ if (!strcmp(ctx->name, DAV_PROPFIND_RESP) && ls->dentry_name) {
+ if (ls->dentry_flags & IS_DIR) {
+ if (ls->flags & PROCESS_DIRS) {
+ ls->userFunc(ls);
+ }
+ if (strcmp(ls->dentry_name, ls->path) &&
+ ls->flags & RECURSIVE) {
+ remote_ls(ls->dentry_name,
+ ls->flags,
+ ls->userFunc,
+ ls->userData);
+ }
+ } else if (ls->flags & PROCESS_FILES) {
+ ls->userFunc(ls);
}
+ } else if (!strcmp(ctx->name, DAV_PROPFIND_NAME) && ctx->cdata) {
+ ls->dentry_name = xmalloc(strlen(ctx->cdata) -
+ remote->path_len + 1);
+ strcpy(ls->dentry_name, ctx->cdata + remote->path_len);
+ } else if (!strcmp(ctx->name, DAV_PROPFIND_COLLECTION)) {
+ ls->dentry_flags |= IS_DIR;
}
- } else {
- fprintf(stderr, "Unable to start request\n");
+ } else if (!strcmp(ctx->name, DAV_PROPFIND_RESP)) {
+ if (ls->dentry_name) {
+ free(ls->dentry_name);
+ }
+ ls->dentry_name = NULL;
+ ls->dentry_flags = 0;
}
-
- free(url);
- free(out_data);
- free(in_buffer.buffer);
- curl_slist_free_all(dav_headers);
}
-static void get_remote_object_list(unsigned char parent)
+static void remote_ls(const char *path, int flags,
+ void (*userFunc)(struct remote_ls_ctx *ls),
+ void *userData)
{
- char *url;
+ char *url = xmalloc(strlen(remote->url) + strlen(path) + 1);
struct active_request_slot *slot;
+ struct slot_results results;
struct buffer in_buffer;
struct buffer out_buffer;
char *in_data;
enum XML_Status result;
struct curl_slist *dav_headers = NULL;
struct xml_ctx ctx;
- char path[] = "/objects/XX/";
- static const char hex[] = "0123456789abcdef";
- unsigned int val = parent;
+ struct remote_ls_ctx ls;
+
+ ls.flags = flags;
+ ls.path = strdup(path);
+ ls.dentry_name = NULL;
+ ls.dentry_flags = 0;
+ ls.userData = userData;
+ ls.userFunc = userFunc;
- path[9] = hex[val >> 4];
- path[10] = hex[val & 0xf];
- url = xmalloc(strlen(remote->url) + strlen(path) + 1);
sprintf(url, "%s%s", remote->url, path);
out_buffer.size = strlen(PROPFIND_ALL_REQUEST);
dav_headers = curl_slist_append(dav_headers, "Content-Type: text/xml");
slot = get_active_slot();
+ slot->results = &results;
curl_easy_setopt(slot->curl, CURLOPT_INFILE, &out_buffer);
curl_easy_setopt(slot->curl, CURLOPT_INFILESIZE, out_buffer.size);
curl_easy_setopt(slot->curl, CURLOPT_READFUNCTION, fread_buffer);
if (start_active_slot(slot)) {
run_active_slot(slot);
- if (slot->curl_result == CURLE_OK) {
- remote_dir_exists[parent] = 1;
+ if (results.curl_result == CURLE_OK) {
ctx.name = xcalloc(10, 1);
ctx.len = 0;
ctx.cdata = NULL;
- ctx.userFunc = handle_remote_object_list_ctx;
+ ctx.userFunc = handle_remote_ls_ctx;
+ ctx.userData = &ls;
XML_SetUserData(parser, &ctx);
XML_SetElementHandler(parser, xml_start_tag,
xml_end_tag);
XML_ErrorString(
XML_GetErrorCode(parser)));
}
- } else {
- remote_dir_exists[parent] = 0;
}
} else {
- fprintf(stderr, "Unable to start request\n");
+ fprintf(stderr, "Unable to start PROPFIND request\n");
}
+ free(ls.path);
free(url);
free(out_data);
free(in_buffer.buffer);
curl_slist_free_all(dav_headers);
}
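`handle_remote_ls_ctx` dispatches each PROPFIND response entry based on the caller's flags: directories go to `userFunc` when `PROCESS_DIRS` is set and are recursed into when `RECURSIVE` is set (skipping the listing's own path), while plain files go to `userFunc` under `PROCESS_FILES`. A sketch of that decision table (flag values here are arbitrary for illustration; the real ones live in http-push.c):

```c
#include <assert.h>

#define PROCESS_FILES 1
#define PROCESS_DIRS  2
#define RECURSIVE     4
#define IS_DIR        8  /* per-entry flag, as on remote_ls_ctx.dentry_flags */

#define CALL_USER  1
#define DO_RECURSE 2

/* Decide what to do with one directory entry, mirroring the
 * dispatch in handle_remote_ls_ctx(); returns a bitmask of
 * CALL_USER and DO_RECURSE. */
static int ls_dispatch(int ls_flags, int entry_flags, int is_self)
{
	int action = 0;
	if (entry_flags & IS_DIR) {
		if (ls_flags & PROCESS_DIRS)
			action |= CALL_USER;
		if (!is_self && (ls_flags & RECURSIVE))
			action |= DO_RECURSE;
	} else if (ls_flags & PROCESS_FILES) {
		action |= CALL_USER;
	}
	return action;
}
```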
+static void get_remote_object_list(unsigned char parent)
+{
+ char path[] = "objects/XX/";
+ static const char hex[] = "0123456789abcdef";
+ unsigned int val = parent;
+
+ path[8] = hex[val >> 4];
+ path[9] = hex[val & 0xf];
+ remote_dir_exists[val] = 0;
+ remote_ls(path, (PROCESS_FILES | PROCESS_DIRS),
+ process_ls_object, &val);
+}
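`get_remote_object_list` derives the loose-object fanout directory from the first byte of the SHA-1, patching the two `X` placeholders in `"objects/XX/"` with hex nibbles. The same construction as a self-contained helper (name is illustrative):

```c
#include <assert.h>
#include <string.h>

/* Build the loose-object fanout directory ("objects/XX/") for the
 * given first SHA-1 byte, as get_remote_object_list() does. */
static void fanout_path(unsigned char first_byte, char *out /* >= 12 bytes */)
{
	static const char hex[] = "0123456789abcdef";

	strcpy(out, "objects/XX/");
	out[8] = hex[first_byte >> 4];
	out[9] = hex[first_byte & 0xf];
}
```

This also shows why `process_ls_object` expects entry names of length 49: 8 for `objects/`, 3 for `XX/`, and 38 for the remaining hex digits of the object name.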
+
static int locking_available(void)
{
struct active_request_slot *slot;
+ struct slot_results results;
struct buffer in_buffer;
struct buffer out_buffer;
char *in_data;
dav_headers = curl_slist_append(dav_headers, "Depth: 0");
dav_headers = curl_slist_append(dav_headers, "Content-Type: text/xml");
-
+
slot = get_active_slot();
+ slot->results = &results;
curl_easy_setopt(slot->curl, CURLOPT_INFILE, &out_buffer);
curl_easy_setopt(slot->curl, CURLOPT_INFILESIZE, out_buffer.size);
curl_easy_setopt(slot->curl, CURLOPT_READFUNCTION, fread_buffer);
if (start_active_slot(slot)) {
run_active_slot(slot);
- if (slot->curl_result == CURLE_OK) {
+ if (results.curl_result == CURLE_OK) {
ctx.name = xcalloc(10, 1);
ctx.len = 0;
ctx.cdata = NULL;
}
}
} else {
- fprintf(stderr, "Unable to start request\n");
+ fprintf(stderr, "Unable to start PROPFIND request\n");
}
free(out_data);
return p;
}
-static void get_delta(struct rev_info *revs, struct remote_lock *lock)
+static int get_delta(struct rev_info *revs, struct remote_lock *lock)
{
struct commit *commit;
struct object_list **p = &objects, *pending;
+ int count = 0;
while ((commit = get_revision(revs)) != NULL) {
p = process_tree(commit->tree, p, NULL, "");
commit->object.flags |= LOCAL;
if (!(commit->object.flags & UNINTERESTING))
- add_request(&commit->object, lock);
+ count += add_send_request(&commit->object, lock);
}
for (pending = revs->pending_objects; pending; pending = pending->next) {
while (objects) {
if (!(objects->item->flags & UNINTERESTING))
- add_request(objects->item, lock);
+ count += add_send_request(objects->item, lock);
objects = objects->next;
}
+
+ return count;
}
static int update_remote(unsigned char *sha1, struct remote_lock *lock)
{
struct active_request_slot *slot;
+ struct slot_results results;
char *out_data;
char *if_header;
struct buffer out_buffer;
out_buffer.buffer = out_data;
slot = get_active_slot();
+ slot->results = &results;
curl_easy_setopt(slot->curl, CURLOPT_INFILE, &out_buffer);
curl_easy_setopt(slot->curl, CURLOPT_INFILESIZE, out_buffer.size);
curl_easy_setopt(slot->curl, CURLOPT_READFUNCTION, fread_buffer);
run_active_slot(slot);
free(out_data);
free(if_header);
- if (slot->curl_result != CURLE_OK) {
+ if (results.curl_result != CURLE_OK) {
fprintf(stderr,
"PUT error: curl result=%d, HTTP code=%ld\n",
- slot->curl_result, slot->http_code);
+ results.curl_result, results.http_code);
		/* Should we attempt recovery? */
return 0;
}
{
struct ref *ref;
unsigned char remote_sha1[20];
+ struct object *obj;
+ int len = strlen(refname) + 1;
if (fetch_ref(refname, remote_sha1) != 0) {
fprintf(stderr,
return;
}
- int len = strlen(refname) + 1;
+ /*
+ * Fetch a copy of the object if it doesn't exist locally - it
+ * may be required for updating server info later.
+ */
+ if (remote->can_update_info_refs && !has_sha1_file(remote_sha1)) {
+ obj = lookup_unknown_object(remote_sha1);
+ if (obj) {
+ fprintf(stderr, " fetch %s for %s\n",
+ sha1_to_hex(remote_sha1), refname);
+ add_fetch_request(obj);
+ }
+ }
+
ref = xcalloc(1, sizeof(*ref) + len);
memcpy(ref->old_sha1, remote_sha1, 20);
memcpy(ref->name, refname, len);
static void get_dav_remote_heads(void)
{
remote_tail = &remote_refs;
- crawl_remote_refs("refs/");
+ remote_ls("refs/", (PROCESS_FILES | PROCESS_DIRS | RECURSIVE), process_ls_ref, NULL);
}
static int is_zero_sha1(const unsigned char *sha1)
}
}
+static void add_remote_info_ref(struct remote_ls_ctx *ls)
+{
+ struct buffer *buf = (struct buffer *)ls->userData;
+ unsigned char remote_sha1[20];
+ struct object *o;
+ int len;
+ char *ref_info;
+
+ if (fetch_ref(ls->dentry_name, remote_sha1) != 0) {
+ fprintf(stderr,
+ "Unable to fetch ref %s from %s\n",
+ ls->dentry_name, remote->url);
+ aborted = 1;
+ return;
+ }
+
+ o = parse_object(remote_sha1);
+ if (!o) {
+ fprintf(stderr,
+ "Unable to parse object %s for remote ref %s\n",
+ sha1_to_hex(remote_sha1), ls->dentry_name);
+ aborted = 1;
+ return;
+ }
+
+ len = strlen(ls->dentry_name) + 42;
+ ref_info = xcalloc(len + 1, 1);
+ sprintf(ref_info, "%s %s\n",
+ sha1_to_hex(remote_sha1), ls->dentry_name);
+ fwrite_buffer(ref_info, 1, len, buf);
+ free(ref_info);
+
+ if (o->type == tag_type) {
+ o = deref_tag(o, ls->dentry_name, 0);
+ if (o) {
+ len = strlen(ls->dentry_name) + 45;
+ ref_info = xcalloc(len + 1, 1);
+ sprintf(ref_info, "%s %s^{}\n",
+ sha1_to_hex(o->sha1), ls->dentry_name);
+ fwrite_buffer(ref_info, 1, len, buf);
+ free(ref_info);
+ }
+ }
+}
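`add_remote_info_ref` writes one `"<40-hex-sha1> <refname>\n"` line per ref into the buffer, and for tags appends a second, peeled line with `^{}` after the refname (which is why the allocations are `strlen(name) + 42` and `+ 45`). A sketch of just the formatting (helper name is hypothetical):

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* Format one info/refs line: "<40-hex-sha1> <refname>\n".
 * With peeled != 0, append "^{}" to the refname as done for tags. */
static int format_ref_info(char *out, size_t outsz,
			   const char *sha1_hex, const char *refname,
			   int peeled)
{
	return snprintf(out, outsz, "%s %s%s\n",
			sha1_hex, refname, peeled ? "^{}" : "");
}
```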
+
+static void update_remote_info_refs(struct remote_lock *lock)
+{
+ struct buffer buffer;
+ struct active_request_slot *slot;
+ struct slot_results results;
+ char *if_header;
+ struct curl_slist *dav_headers = NULL;
+
+ buffer.buffer = xmalloc(4096);
+ memset(buffer.buffer, 0, 4096);
+ buffer.size = 4096;
+ buffer.posn = 0;
+ remote_ls("refs/", (PROCESS_FILES | RECURSIVE),
+ add_remote_info_ref, &buffer);
+ if (!aborted) {
+ if_header = xmalloc(strlen(lock->token) + 25);
+ sprintf(if_header, "If: (<opaquelocktoken:%s>)", lock->token);
+ dav_headers = curl_slist_append(dav_headers, if_header);
+
+ slot = get_active_slot();
+ slot->results = &results;
+ curl_easy_setopt(slot->curl, CURLOPT_INFILE, &buffer);
+ curl_easy_setopt(slot->curl, CURLOPT_INFILESIZE, buffer.posn);
+ curl_easy_setopt(slot->curl, CURLOPT_READFUNCTION, fread_buffer);
+ curl_easy_setopt(slot->curl, CURLOPT_WRITEFUNCTION, fwrite_null);
+ curl_easy_setopt(slot->curl, CURLOPT_CUSTOMREQUEST, DAV_PUT);
+ curl_easy_setopt(slot->curl, CURLOPT_HTTPHEADER, dav_headers);
+ curl_easy_setopt(slot->curl, CURLOPT_UPLOAD, 1);
+ curl_easy_setopt(slot->curl, CURLOPT_PUT, 1);
+ curl_easy_setopt(slot->curl, CURLOPT_URL, lock->url);
+
+ buffer.posn = 0;
+
+ if (start_active_slot(slot)) {
+ run_active_slot(slot);
+ if (results.curl_result != CURLE_OK) {
+ fprintf(stderr,
+ "PUT error: curl result=%d, HTTP code=%ld\n",
+ results.curl_result, results.http_code);
+ }
+ }
+ free(if_header);
+ }
+ free(buffer.buffer);
+}
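The PUT in `update_remote_info_refs` is tied to the held DAV lock via an `If:` header carrying the opaque lock token; the `strlen(lock->token) + 25` allocation covers the 24 fixed characters plus the terminating NUL. A sketch of just that header construction (helper name is hypothetical):

```c
#include <assert.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Build the DAV "If:" header that ties a PUT to a held lock, as
 * update_remote_info_refs() does.  Caller frees the result. */
static char *make_if_header(const char *lock_token)
{
	/* "If: (<opaquelocktoken:" (22) + token + ">)" (2) + NUL == +25 */
	char *hdr = malloc(strlen(lock_token) + 25);
	if (hdr)
		sprintf(hdr, "If: (<opaquelocktoken:%s>)", lock_token);
	return hdr;
}
```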
+
+static int remote_exists(const char *path)
+{
+ char *url = xmalloc(strlen(remote->url) + strlen(path) + 1);
+ struct active_request_slot *slot;
+ struct slot_results results;
+
+ sprintf(url, "%s%s", remote->url, path);
+
+ slot = get_active_slot();
+ slot->results = &results;
+ curl_easy_setopt(slot->curl, CURLOPT_URL, url);
+ curl_easy_setopt(slot->curl, CURLOPT_NOBODY, 1);
+
+	if (start_active_slot(slot)) {
+		run_active_slot(slot);
+		free(url);
+		if (results.http_code == 404)
+			return 0;
+		else if (results.curl_result == CURLE_OK)
+			return 1;
+		else
+			fprintf(stderr, "HEAD HTTP error %ld\n", results.http_code);
+	} else {
+		free(url);
+		fprintf(stderr, "Unable to start HEAD request\n");
+	}
+
+ return -1;
+}
+
+static void fetch_symref(char *path, char **symref, unsigned char *sha1)
+{
+ char *url;
+ struct buffer buffer;
+ struct active_request_slot *slot;
+ struct slot_results results;
+
+ url = xmalloc(strlen(remote->url) + strlen(path) + 1);
+ sprintf(url, "%s%s", remote->url, path);
+
+ buffer.size = 4096;
+ buffer.posn = 0;
+ buffer.buffer = xmalloc(buffer.size);
+
+ slot = get_active_slot();
+ slot->results = &results;
+ curl_easy_setopt(slot->curl, CURLOPT_FILE, &buffer);
+ curl_easy_setopt(slot->curl, CURLOPT_WRITEFUNCTION, fwrite_buffer);
+ curl_easy_setopt(slot->curl, CURLOPT_HTTPHEADER, NULL);
+ curl_easy_setopt(slot->curl, CURLOPT_URL, url);
+ if (start_active_slot(slot)) {
+ run_active_slot(slot);
+ if (results.curl_result != CURLE_OK) {
+ die("Couldn't get %s for remote symref\n%s",
+ url, curl_errorstr);
+ }
+ } else {
+ die("Unable to start remote symref request");
+ }
+ free(url);
+
+ if (*symref != NULL)
+ free(*symref);
+ *symref = NULL;
+ memset(sha1, 0, 20);
+
+ if (buffer.posn == 0)
+ return;
+
+ /* If it's a symref, set the refname; otherwise try for a sha1 */
+ if (!strncmp((char *)buffer.buffer, "ref: ", 5)) {
+ *symref = xcalloc(buffer.posn - 5, 1);
+ strncpy(*symref, (char *)buffer.buffer + 5, buffer.posn - 6);
+ } else {
+ get_sha1_hex(buffer.buffer, sha1);
+ }
+
+ free(buffer.buffer);
+}
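`fetch_symref` distinguishes a symref payload (`"ref: refs/heads/master\n"`) from a raw sha1 by the `"ref: "` prefix, copying the target name minus the prefix and the trailing newline. The parsing step in isolation (function name is illustrative):

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

/* If buf (len bytes, ending in '\n') starts with "ref: ", return a
 * newly allocated copy of the target refname; otherwise NULL, meaning
 * the caller should try to parse a raw sha1 instead.  Mirrors the
 * branch taken in fetch_symref(). */
static char *parse_symref(const char *buf, size_t len)
{
	char *refname;

	if (len <= 6 || strncmp(buf, "ref: ", 5))
		return NULL;
	refname = calloc(len - 5, 1);           /* room for name + NUL */
	if (refname)
		memcpy(refname, buf + 5, len - 6);  /* drop the '\n' */
	return refname;
}
```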
+
+static int verify_merge_base(unsigned char *head_sha1, unsigned char *branch_sha1)
+{
+ int pipe_fd[2];
+ pid_t merge_base_pid;
+ char line[PATH_MAX + 20];
+ unsigned char merge_sha1[20];
+ int verified = 0;
+
+ if (pipe(pipe_fd) < 0)
+ die("Verify merge base: pipe failed");
+
+ merge_base_pid = fork();
+ if (!merge_base_pid) {
+ static const char *args[] = {
+ "merge-base",
+ "-a",
+ NULL,
+ NULL,
+ NULL
+ };
+ args[2] = strdup(sha1_to_hex(head_sha1));
+ args[3] = sha1_to_hex(branch_sha1);
+
+ dup2(pipe_fd[1], 1);
+ close(pipe_fd[0]);
+ close(pipe_fd[1]);
+ execv_git_cmd(args);
+ die("merge-base setup failed");
+ }
+ if (merge_base_pid < 0)
+ die("merge-base fork failed");
+
+ dup2(pipe_fd[0], 0);
+ close(pipe_fd[0]);
+ close(pipe_fd[1]);
+ while (fgets(line, sizeof(line), stdin) != NULL) {
+ if (get_sha1_hex(line, merge_sha1))
+ die("expected sha1, got garbage:\n %s", line);
+ if (!memcmp(branch_sha1, merge_sha1, 20)) {
+ verified = 1;
+ break;
+ }
+ }
+
+ return verified;
+}
+
+static int delete_remote_branch(char *pattern, int force)
+{
+ struct ref *refs = remote_refs;
+ struct ref *remote_ref = NULL;
+ unsigned char head_sha1[20];
+ char *symref = NULL;
+ int match;
+ int patlen = strlen(pattern);
+ int i;
+ struct active_request_slot *slot;
+ struct slot_results results;
+ char *url;
+
+ /* Find the remote branch(es) matching the specified branch name */
+ for (match = 0; refs; refs = refs->next) {
+ char *name = refs->name;
+ int namelen = strlen(name);
+ if (namelen < patlen ||
+ memcmp(name + namelen - patlen, pattern, patlen))
+ continue;
+ if (namelen != patlen && name[namelen - patlen - 1] != '/')
+ continue;
+ match++;
+ remote_ref = refs;
+ }
+ if (match == 0)
+ return error("No remote branch matches %s", pattern);
+ if (match != 1)
+ return error("More than one remote branch matches %s",
+ pattern);
+
+ /*
+ * Remote HEAD must be a symref (not exactly foolproof; a remote
+ * symlink to a symref will look like a symref)
+ */
+ fetch_symref("HEAD", &symref, head_sha1);
+ if (!symref)
+ return error("Remote HEAD is not a symref");
+
+ /* Remote branch must not be the remote HEAD */
+	for (i = 0; symref && i < MAXDEPTH; i++) {
+ if (!strcmp(remote_ref->name, symref))
+ return error("Remote branch %s is the current HEAD",
+ remote_ref->name);
+ fetch_symref(symref, &symref, head_sha1);
+ }
+
+ /* Run extra sanity checks if delete is not forced */
+ if (!force) {
+ /* Remote HEAD must resolve to a known object */
+ if (symref)
+ return error("Remote HEAD symrefs too deep");
+ if (is_zero_sha1(head_sha1))
+ return error("Unable to resolve remote HEAD");
+ if (!has_sha1_file(head_sha1))
+ return error("Remote HEAD resolves to object %s\nwhich does not exist locally, perhaps you need to fetch?", sha1_to_hex(head_sha1));
+
+ /* Remote branch must resolve to a known object */
+ if (is_zero_sha1(remote_ref->old_sha1))
+ return error("Unable to resolve remote branch %s",
+ remote_ref->name);
+ if (!has_sha1_file(remote_ref->old_sha1))
+ return error("Remote branch %s resolves to object %s\nwhich does not exist locally, perhaps you need to fetch?", remote_ref->name, sha1_to_hex(remote_ref->old_sha1));
+
+ /* Remote branch must be an ancestor of remote HEAD */
+ if (!verify_merge_base(head_sha1, remote_ref->old_sha1)) {
+ return error("The branch '%s' is not a strict subset of your current HEAD.\nIf you are sure you want to delete it, run:\n\t'git http-push -D %s %s'", remote_ref->name, remote->url, pattern);
+ }
+ }
+
+ /* Send delete request */
+ fprintf(stderr, "Removing remote branch '%s'\n", remote_ref->name);
+ url = xmalloc(strlen(remote->url) + strlen(remote_ref->name) + 1);
+ sprintf(url, "%s%s", remote->url, remote_ref->name);
+ slot = get_active_slot();
+ slot->results = &results;
+ curl_easy_setopt(slot->curl, CURLOPT_HTTPGET, 1);
+ curl_easy_setopt(slot->curl, CURLOPT_WRITEFUNCTION, fwrite_null);
+ curl_easy_setopt(slot->curl, CURLOPT_URL, url);
+ curl_easy_setopt(slot->curl, CURLOPT_CUSTOMREQUEST, DAV_DELETE);
+ if (start_active_slot(slot)) {
+ run_active_slot(slot);
+ free(url);
+ if (results.curl_result != CURLE_OK)
+			return error("DELETE request failed (%d/%ld)",
+				     results.curl_result, results.http_code);
+ } else {
+ free(url);
+ return error("Unable to start DELETE request");
+ }
+
+ return 0;
+}
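The matching loop at the top of `delete_remote_branch` accepts a pattern that is either a full ref name or a suffix of one, but only on a path-component boundary, so `master` matches `refs/heads/master` without also matching `refs/heads/xmaster`. The predicate extracted as a standalone function (name is illustrative):

```c
#include <assert.h>
#include <string.h>

/* Return 1 if the ref name ends with pattern on a path-component
 * boundary, as in the matching loop of delete_remote_branch(). */
static int ref_matches(const char *name, const char *pattern)
{
	size_t namelen = strlen(name);
	size_t patlen = strlen(pattern);

	if (namelen < patlen ||
	    memcmp(name + namelen - patlen, pattern, patlen))
		return 0;
	if (namelen != patlen && name[namelen - patlen - 1] != '/')
		return 0;
	return 1;
}
```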
+
int main(int argc, char **argv)
{
struct transfer_request *request;
struct transfer_request *next_request;
int nr_refspec = 0;
char **refspec = NULL;
- struct remote_lock *ref_lock;
+ struct remote_lock *ref_lock = NULL;
+ struct remote_lock *info_ref_lock = NULL;
struct rev_info revs;
+ int delete_branch = 0;
+ int force_delete = 0;
+ int objects_to_send;
int rc = 0;
int i;
+ int new_refs;
+ struct ref *ref;
setup_git_directory();
setup_ident();
- remote = xmalloc(sizeof(*remote));
- remote->url = NULL;
- remote->path_len = 0;
- remote->packs = NULL;
+	remote = xcalloc(1, sizeof(*remote));
argv++;
for (i = 1; i < argc; i++, argv++) {
push_verbosely = 1;
continue;
}
- usage(http_push_usage);
+ if (!strcmp(arg, "-d")) {
+ delete_branch = 1;
+ continue;
+ }
+ if (!strcmp(arg, "-D")) {
+ delete_branch = 1;
+ force_delete = 1;
+ continue;
+ }
}
if (!remote->url) {
- remote->url = arg;
char *path = strstr(arg, "//");
+ remote->url = arg;
if (path) {
path = index(path+2, '/');
if (path)
if (!remote->url)
usage(http_push_usage);
+ if (delete_branch && nr_refspec != 1)
+ die("You must specify only one branch name when deleting a remote branch");
+
memset(remote_dir_exists, -1, 256);
http_init();
goto cleanup;
}
+ /* Check whether the remote has server info files */
+ remote->can_update_info_refs = 0;
+ remote->has_info_refs = remote_exists("info/refs");
+ remote->has_info_packs = remote_exists("objects/info/packs");
+ if (remote->has_info_refs) {
+ info_ref_lock = lock_remote("info/refs", LOCK_TIME);
+ if (info_ref_lock)
+ remote->can_update_info_refs = 1;
+ }
+ if (remote->has_info_packs)
+ fetch_indices();
+
/* Get a list of all local and remote heads to validate refspecs */
get_local_heads();
fprintf(stderr, "Fetching remote heads...\n");
get_dav_remote_heads();
+ /* Remove a remote branch if -d or -D was specified */
+ if (delete_branch) {
+ if (delete_remote_branch(refspec[0], force_delete) == -1)
+ fprintf(stderr, "Unable to delete remote branch %s\n",
+ refspec[0]);
+ goto cleanup;
+ }
+
/* match them up */
if (!remote_tail)
remote_tail = &remote_refs;
return 0;
}
- int ret = 0;
- int new_refs = 0;
- struct ref *ref;
+ new_refs = 0;
for (ref = remote_refs; ref; ref = ref->next) {
char old_hex[60], *new_hex;
+ const char *commit_argv[4];
+ int commit_argc;
+ char *new_sha1_hex, *old_sha1_hex;
+
if (!ref->peer_ref)
continue;
if (!memcmp(ref->old_sha1, ref->peer_ref->new_sha1, 20)) {
"need to pull first?",
ref->name,
ref->peer_ref->name);
- ret = -2;
+ rc = -2;
continue;
}
}
memcpy(ref->new_sha1, ref->peer_ref->new_sha1, 20);
if (is_zero_sha1(ref->new_sha1)) {
error("cannot happen anymore");
- ret = -3;
+ rc = -3;
continue;
}
new_refs++;
}
/* Set up revision info for this refspec */
- const char *commit_argv[3];
- int commit_argc = 2;
- char *new_sha1_hex = strdup(sha1_to_hex(ref->new_sha1));
- char *old_sha1_hex = NULL;
- commit_argv[1] = new_sha1_hex;
+ commit_argc = 3;
+ new_sha1_hex = strdup(sha1_to_hex(ref->new_sha1));
+ old_sha1_hex = NULL;
+ commit_argv[1] = "--objects";
+ commit_argv[2] = new_sha1_hex;
if (!push_all && !is_zero_sha1(ref->old_sha1)) {
old_sha1_hex = xmalloc(42);
sprintf(old_sha1_hex, "^%s",
sha1_to_hex(ref->old_sha1));
- commit_argv[2] = old_sha1_hex;
+ commit_argv[3] = old_sha1_hex;
commit_argc++;
}
- revs.commits = NULL;
setup_revisions(commit_argc, commit_argv, &revs, NULL);
- revs.tag_objects = 1;
- revs.tree_objects = 1;
- revs.blob_objects = 1;
free(new_sha1_hex);
if (old_sha1_hex) {
free(old_sha1_hex);
pushing = 0;
prepare_revision_walk(&revs);
mark_edges_uninteresting(revs.commits);
- fetch_indices();
- get_delta(&revs, ref_lock);
+ objects_to_send = get_delta(&revs, ref_lock);
finish_all_active_slots();
	/* Push missing objects to remote; this would be a
	   convenient time to pack them first if appropriate. */
pushing = 1;
+ if (objects_to_send)
+ fprintf(stderr, " sending %d objects\n",
+ objects_to_send);
fill_active_slots();
finish_all_active_slots();
if (!rc)
fprintf(stderr, " done\n");
unlock_remote(ref_lock);
+ check_locks();
+ }
+
+ /* Update remote server info if appropriate */
+ if (remote->has_info_refs && new_refs) {
+ if (info_ref_lock && remote->can_update_info_refs) {
+ fprintf(stderr, "Updating remote server info\n");
+ update_remote_info_refs(info_ref_lock);
+ } else {
+ fprintf(stderr, "Unable to update server info\n");
+ }
}
+ if (info_ref_lock)
+ unlock_remote(info_ref_lock);
cleanup:
free(remote);
slot->in_use = 1;
slot->local = NULL;
slot->results = NULL;
+ slot->finished = NULL;
slot->callback_data = NULL;
slot->callback_func = NULL;
curl_easy_setopt(slot->curl, CURLOPT_HTTPHEADER, pragma_header);
fd_set excfds;
int max_fd;
struct timeval select_timeout;
+ int finished = 0;
- while (slot->in_use) {
+ slot->finished = &finished;
+ while (!finished) {
data_received = 0;
step_active_slots();
closedown_active_slot(slot);
curl_easy_getinfo(slot->curl, CURLINFO_HTTP_CODE, &slot->http_code);
+ if (slot->finished != NULL)
+ (*slot->finished) = 1;
+
/* Store slot results so they can be read after the slot is reused */
if (slot->results != NULL) {
slot->results->curl_result = slot->curl_result;
int in_use;
CURLcode curl_result;
long http_code;
+ int *finished;
struct slot_results *results;
void *callback_data;
void (*callback_func)(void *data);
#include <netinet/in.h>
#include <netinet/tcp.h>
#include <arpa/inet.h>
+#include <sys/socket.h>
#include <netdb.h>
typedef struct store_conf {
}
static int
-vasprintf( char **strp, const char *fmt, va_list ap )
+git_vasprintf( char **strp, const char *fmt, va_list ap )
{
int len;
char tmp[1024];
static int
nfvasprintf( char **str, const char *fmt, va_list va )
{
- int ret = vasprintf( str, fmt, va );
+ int ret = git_vasprintf( str, fmt, va );
if (ret < 0)
die( "Fatal: Out of memory\n");
return ret;
_exit( 127 );
close( a[0] );
close( a[1] );
- execl( "/bin/sh", "sh", "-c", srvc->tunnel, 0 );
+ execl( "/bin/sh", "sh", "-c", srvc->tunnel, NULL );
_exit( 127 );
}
close(fd);
return 0;
}
- buf = xmalloc(size);
+ buf = xmalloc(size+1);
if (read(fd, buf, size) != size)
goto err;
close(fd);
+ buf[size++] = '\n';
entry = buf;
for (i = 0; i < size; i++) {
if (buf[i] == '\n') {
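The hunk above allocates one extra byte and appends a sentinel `'\n'`, so the line-splitting loop needs no special case for a file whose last line is unterminated. A minimal sketch of the trick, using a hypothetical `count_lines` helper (note that input already ending in `'\n'` would gain one empty trailing entry, which the real loop can skip):

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

/* Count newline-terminated entries, tolerating a missing final newline
 * by appending a sentinel '\n' into a one-byte-larger buffer. */
static int count_lines(const char *data, size_t len)
{
	char *buf = malloc(len + 1);	/* room for the sentinel */
	size_t i, n = 0;
	memcpy(buf, data, len);
	buf[len++] = '\n';		/* sentinel terminates the last line */
	for (i = 0; i < len; i++)
		if (buf[i] == '\n')
			n++;
	free(buf);
	return (int)n;
}
```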
struct tree *tree;
prefix = setup_git_directory();
+ git_config(git_default_config);
if (prefix && *prefix)
chomp_prefix = strlen(prefix);
while (1 < argc && argv[1][0] == '-') {
unsigned char rev1key[20], rev2key[20];
setup_git_directory();
+ git_config(git_default_config);
while (1 < argc && argv[1][0] == '-') {
char *arg = argv[1];
int as_is = 0, all = 0, transform_stdin = 0;
setup_git_directory();
+ git_config(git_default_config);
if (argc < 2)
usage(name_rev_usage);
}
+#define MAX_CHAIN 40
+
static void show_pack_info(struct packed_git *p)
{
struct pack_header *hdr;
int nr_objects, i;
+ unsigned int chain_histogram[MAX_CHAIN];
hdr = p->pack_base;
nr_objects = ntohl(hdr->hdr_entries);
+ memset(chain_histogram, 0, sizeof(chain_histogram));
for (i = 0; i < nr_objects; i++) {
unsigned char sha1[20], base_sha1[20];
printf("%s ", sha1_to_hex(sha1));
if (!delta_chain_length)
printf("%-6s %lu %u\n", type, size, e.offset);
- else
+ else {
printf("%-6s %lu %u %u %s\n", type, size, e.offset,
delta_chain_length, sha1_to_hex(base_sha1));
+ if (delta_chain_length < MAX_CHAIN)
+ chain_histogram[delta_chain_length]++;
+ else
+ chain_histogram[0]++;
+ }
}
+ for (i = 0; i < MAX_CHAIN; i++) {
+ if (!chain_histogram[i])
+ continue;
+ printf("chain length %s %d: %d object%s\n",
+ i ? "=" : ">=",
+ i ? i : MAX_CHAIN,
+ chain_histogram[i],
+ 1 < chain_histogram[i] ? "s" : "");
+ }
}
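The histogram above buckets delta chain lengths 1 through `MAX_CHAIN - 1` individually and reuses slot 0 for everything at or beyond `MAX_CHAIN` (printed as `>= MAX_CHAIN`). A minimal sketch of that bucketing, with a hypothetical `record` helper:

```c
#include <assert.h>
#include <string.h>

#define MAX_CHAIN 40

/* Count one delta chain: short chains go in their own slot, and slot 0
 * (unused by real chain lengths, which start at 1) collects the
 * ">= MAX_CHAIN" overflow bucket. */
static void record(unsigned int histogram[MAX_CHAIN], unsigned int chain_len)
{
	if (chain_len < MAX_CHAIN)
		histogram[chain_len]++;
	else
		histogram[0]++;
}
```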
int verify_pack(struct packed_git *p, int verbose)
 * be used as the base object to delta huge
* objects against.
*/
- int based_on_preferred; /* current delta candidate is a preferred
- * one, or delta against a preferred one.
- */
};
/*
{
struct object_entry *cur_entry = cur->entry;
struct object_entry *old_entry = old->entry;
- int old_preferred = (old_entry->preferred_base ||
- old_entry->based_on_preferred);
unsigned long size, oldsize, delta_size, sizediff;
long max_size;
void *delta_buf;
* delete).
*/
max_size = size / 2 - 20;
- if (cur_entry->delta) {
- if (cur_entry->based_on_preferred) {
- if (old_preferred)
- max_size = cur_entry->delta_size-1;
- else
- /* trying with non-preferred one when we
- * already have a delta based on preferred
- * one is pointless.
- */
- return -1;
- }
- else if (!old_preferred)
- max_size = cur_entry->delta_size-1;
- else
- /* otherwise... even if delta with a
- * preferred one produces a bigger result than
- * what we currently have, which is based on a
- * non-preferred one, it is OK.
- */
- ;
- }
+ if (cur_entry->delta)
+ max_size = cur_entry->delta_size-1;
if (sizediff >= max_size)
return -1;
delta_buf = diff_delta(old->data, oldsize,
cur_entry->delta = old_entry;
cur_entry->delta_size = delta_size;
cur_entry->depth = old_entry->depth + 1;
- cur_entry->based_on_preferred = old_preferred;
free(delta_buf);
return 0;
}
if (try_delta(n, m, depth) < 0)
break;
}
+#if 0
+ /* if we made n a delta, and if n is already at max
+ * depth, leaving it in the window is pointless. we
+ * should evict it first.
+ * ... in theory only; somehow this makes things worse.
+ */
+ if (entry->delta && depth <= entry->depth)
+ continue;
+#endif
idx++;
if (idx >= window)
idx = 0;
merge_fn_t fn = NULL;
setup_git_directory();
+ git_config(git_default_config);
newfd = hold_index_file_for_update(&cache_file, get_index_file());
if (newfd < 0)
if (commit->object.flags & (UNINTERESTING | COUNTED))
break;
- if (!revs.paths || (commit->object.flags & TREECHANGE))
+ if (!revs.prune_fn || (commit->object.flags & TREECHANGE))
nr++;
commit->object.flags |= COUNTED;
p = commit->parents;
nr = 0;
p = list;
while (p) {
- if (!revs.paths || (p->item->object.flags & TREECHANGE))
+ if (!revs.prune_fn || (p->item->object.flags & TREECHANGE))
nr++;
p = p->next;
}
for (p = list; p; p = p->next) {
int distance;
- if (revs.paths && !(p->item->object.flags & TREECHANGE))
+ if (revs.prune_fn && !(p->item->object.flags & TREECHANGE))
continue;
distance = count_distance(p);
unsigned char sha1[20];
const char *prefix = setup_git_directory();
+ git_config(git_default_config);
+
for (i = 1; i < argc; i++) {
struct stat st;
char *arg = argv[i];
return 1;
}
-#define TREE_SAME 0
-#define TREE_NEW 1
-#define TREE_DIFFERENT 2
-static int tree_difference = TREE_SAME;
+static int tree_difference = REV_TREE_SAME;
static void file_add_remove(struct diff_options *options,
int addremove, unsigned mode,
const unsigned char *sha1,
const char *base, const char *path)
{
- int diff = TREE_DIFFERENT;
+ int diff = REV_TREE_DIFFERENT;
/*
- * Is it an add of a new file? It means that
- * the old tree didn't have it at all, so we
- * will turn "TREE_SAME" -> "TREE_NEW", but
- * leave any "TREE_DIFFERENT" alone (and if
- * it already was "TREE_NEW", we'll keep it
- * "TREE_NEW" of course).
+ * Is it an add of a new file? It means that the old tree
+ * didn't have it at all, so we will turn "REV_TREE_SAME" ->
+ * "REV_TREE_NEW", but leave any "REV_TREE_DIFFERENT" alone
+ * (and if it already was "REV_TREE_NEW", we'll keep it
+ * "REV_TREE_NEW" of course).
*/
if (addremove == '+') {
diff = tree_difference;
- if (diff != TREE_SAME)
+ if (diff != REV_TREE_SAME)
return;
- diff = TREE_NEW;
+ diff = REV_TREE_NEW;
}
tree_difference = diff;
}
const unsigned char *new_sha1,
const char *base, const char *path)
{
- tree_difference = TREE_DIFFERENT;
+ tree_difference = REV_TREE_DIFFERENT;
}
static struct diff_options diff_opt = {
.change = file_change,
};
-static int compare_tree(struct tree *t1, struct tree *t2)
+int rev_compare_tree(struct tree *t1, struct tree *t2)
{
if (!t1)
- return TREE_NEW;
+ return REV_TREE_NEW;
if (!t2)
- return TREE_DIFFERENT;
- tree_difference = TREE_SAME;
+ return REV_TREE_DIFFERENT;
+ tree_difference = REV_TREE_SAME;
if (diff_tree_sha1(t1->object.sha1, t2->object.sha1, "", &diff_opt) < 0)
- return TREE_DIFFERENT;
+ return REV_TREE_DIFFERENT;
return tree_difference;
}
-static int same_tree_as_empty(struct tree *t1)
+int rev_same_tree_as_empty(struct tree *t1)
{
int retval;
void *tree;
return;
if (!commit->parents) {
- if (!same_tree_as_empty(commit->tree))
+ if (!rev_same_tree_as_empty(commit->tree))
commit->object.flags |= TREECHANGE;
return;
}
struct commit *p = parent->item;
parse_commit(p);
- switch (compare_tree(p->tree, commit->tree)) {
- case TREE_SAME:
+ switch (rev_compare_tree(p->tree, commit->tree)) {
+ case REV_TREE_SAME:
if (p->object.flags & UNINTERESTING) {
/* Even if a merge with an uninteresting
* side branch brought the entire change
commit->parents = parent;
return;
- case TREE_NEW:
- if (revs->remove_empty_trees && same_tree_as_empty(p->tree)) {
- *pp = parent->next;
- continue;
+ case REV_TREE_NEW:
+ if (revs->remove_empty_trees &&
+ rev_same_tree_as_empty(p->tree)) {
+ /* We are adding all the specified
+ * paths from this parent, so the
+ * history beyond this parent is not
+ * interesting. Remove its parents
+ * (they are grandparents for us).
+ * IOW, we pretend this parent is a
+ * "root" commit.
+ */
+ parse_commit(p);
+ p->parents = NULL;
}
/* fallthrough */
- case TREE_DIFFERENT:
+ case REV_TREE_DIFFERENT:
tree_changed = 1;
pp = &parent->next;
continue;
* simplify the commit history and find the parent
* that has no differences in the path set if one exists.
*/
- if (revs->paths)
- try_to_simplify_commit(revs, commit);
+ if (revs->prune_fn)
+ revs->prune_fn(revs, commit);
parent = commit->parents;
while (parent) {
struct commit_list *newlist = NULL;
struct commit_list **p = &newlist;
- if (revs->paths)
- diff_tree_setup_paths(revs->paths);
-
while (list) {
struct commit_list *entry = list;
struct commit *commit = list->item;
for_each_ref(handle_one_ref);
}
+void init_revisions(struct rev_info *revs)
+{
+ memset(revs, 0, sizeof(*revs));
+ revs->lifo = 1;
+ revs->dense = 1;
+ revs->prefix = setup_git_directory();
+ revs->max_age = -1;
+ revs->min_age = -1;
+ revs->max_count = -1;
+
+ revs->prune_fn = NULL;
+ revs->prune_data = NULL;
+
+ revs->topo_setter = topo_sort_default_setter;
+ revs->topo_getter = topo_sort_default_getter;
+}
+
/*
* Parse revision information, filling in the "rev_info" structure,
* and removing the used arguments from the argument list.
const char **unrecognized = argv + 1;
int left = 1;
- memset(revs, 0, sizeof(*revs));
- revs->lifo = 1;
- revs->dense = 1;
- revs->prefix = setup_git_directory();
- revs->max_age = -1;
- revs->min_age = -1;
- revs->max_count = -1;
+ init_revisions(revs);
/* First, search for "--" */
seen_dashdash = 0;
continue;
argv[i] = NULL;
argc = i;
- revs->paths = get_pathspec(revs->prefix, argv + i + 1);
+ revs->prune_data = get_pathspec(revs->prefix, argv + i + 1);
seen_dashdash = 1;
break;
}
if (lstat(argv[j], &st) < 0)
die("'%s': %s", arg, strerror(errno));
}
- revs->paths = get_pathspec(revs->prefix, argv + i);
+ revs->prune_data = get_pathspec(revs->prefix, argv + i);
break;
}
commit = get_commit_reference(revs, arg, sha1, flags ^ local_flags);
commit = get_commit_reference(revs, def, sha1, 0);
add_one_commit(commit, revs);
}
- if (revs->paths)
+
+ if (revs->prune_data) {
+ diff_tree_setup_paths(revs->prune_data);
+ revs->prune_fn = try_to_simplify_commit;
revs->limited = 1;
+ }
+
return left;
}
if (revs->limited)
limit_list(revs);
if (revs->topo_order)
- sort_in_topological_order(&revs->commits, revs->lifo);
+ sort_in_topological_order_fn(&revs->commits, revs->lifo,
+ revs->topo_setter,
+ revs->topo_getter);
}
static int rewrite_one(struct commit **pp)
return NULL;
if (revs->no_merges && commit->parents && commit->parents->next)
goto next;
- if (revs->paths && revs->dense) {
+ if (revs->prune_fn && revs->dense) {
if (!(commit->object.flags & TREECHANGE))
goto next;
rewrite_parents(commit);
#define SHOWN (1u<<3)
#define TMP_MARK (1u<<4) /* for isolated cases; clean after use */
+struct rev_info;
+
+typedef void (prune_fn_t)(struct rev_info *revs, struct commit *commit);
+
struct rev_info {
/* Starting list */
struct commit_list *commits;
/* Basic information */
const char *prefix;
- const char **paths;
+ void *prune_data;
+ prune_fn_t *prune_fn;
/* Traversal flags */
unsigned int dense:1,
int max_count;
unsigned long max_age;
unsigned long min_age;
+
+ topo_sort_set_fn_t topo_setter;
+ topo_sort_get_fn_t topo_getter;
};
+#define REV_TREE_SAME 0
+#define REV_TREE_NEW 1
+#define REV_TREE_DIFFERENT 2
+
/* revision.c */
+extern int rev_same_tree_as_empty(struct tree *t1);
+extern int rev_compare_tree(struct tree *t1, struct tree *t2);
+
+extern void init_revisions(struct rev_info *revs);
extern int setup_revisions(int argc, const char **argv, struct rev_info *revs, const char *def);
extern void prepare_revision_walk(struct rev_info *revs);
extern struct commit *get_revision(struct rev_info *revs);
pid_t pid;
setup_git_directory();
+ git_config(git_default_config);
+
argv++;
for (i = 1; i < argc; i++, argv++) {
char *arg = *argv;
if (left < 20)
die("truncated pack file");
+
+ /* The base entry _must_ be in the same pack */
+ if (!find_pack_entry_one(base_sha1, &base_ent, p))
+ die("failed to find delta-pack base object %s",
+ sha1_to_hex(base_sha1));
+ base = unpack_entry_gently(&base_ent, type, &base_size);
+ if (!base)
+ die("failed to read delta-pack base object %s",
+ sha1_to_hex(base_sha1));
+
data = base_sha1 + 20;
data_size = left - 20;
delta_data = xmalloc(delta_size);
if ((st != Z_STREAM_END) || stream.total_out != delta_size)
die("delta data unpack failed");
- /* The base entry _must_ be in the same pack */
- if (!find_pack_entry_one(base_sha1, &base_ent, p))
- die("failed to find delta-pack base object %s",
- sha1_to_hex(base_sha1));
- base = unpack_entry_gently(&base_ent, type, &base_size);
- if (!base)
- die("failed to read delta-pack base object %s",
- sha1_to_hex(base_sha1));
result = patch_delta(base, base_size,
delta_data, delta_size,
&result_size);
static int get_sha1_basic(const char *str, int len, unsigned char *sha1)
{
- static const char *prefix[] = {
- "",
- "refs",
- "refs/tags",
- "refs/heads",
+ static const char *fmt[] = {
+ "%.*s",
+ "refs/%.*s",
+ "refs/tags/%.*s",
+ "refs/heads/%.*s",
+ "refs/remotes/%.*s",
+ "refs/remotes/%.*s/HEAD",
NULL
};
const char **p;
+ const char *warning = "warning: refname '%.*s' is ambiguous.\n";
+ char *pathname;
+ int already_found = 0;
+ unsigned char *this_result;
+ unsigned char sha1_from_ref[20];
if (len == 40 && !get_sha1_hex(str, sha1))
return 0;
if (ambiguous_path(str, len))
return -1;
- for (p = prefix; *p; p++) {
- char *pathname = git_path("%s/%.*s", *p, len, str);
- if (!read_ref(pathname, sha1))
- return 0;
+ for (p = fmt; *p; p++) {
+ this_result = already_found ? sha1_from_ref : sha1;
+ pathname = git_path(*p, len, str);
+ if (!read_ref(pathname, this_result)) {
+ if (warn_ambiguous_refs) {
+ if (already_found)
+ fprintf(stderr, warning, len, str);
+ already_found++;
+ }
+ else
+ return 0;
+ }
}
+ if (already_found)
+ return 0;
return -1;
}
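The rewritten `get_sha1_basic` above tries a short refname against each pattern in `fmt[]` in order, warning when more than one matches. A minimal sketch of the expansion step only (the `expand_ref` helper is hypothetical; the real code hands the result to `git_path` and `read_ref`):

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* Same candidate patterns as fmt[] in the patch. */
static const char *fmt[] = {
	"%.*s",
	"refs/%.*s",
	"refs/tags/%.*s",
	"refs/heads/%.*s",
	"refs/remotes/%.*s",
	"refs/remotes/%.*s/HEAD",
	NULL
};

/* Expand candidate pattern idx for the given short refname. The "%.*s"
 * form takes an explicit length, matching how the caller passes a
 * possibly non-NUL-terminated (str, len) pair. */
static int expand_ref(char *out, size_t outlen, int idx, const char *name)
{
	return snprintf(out, outlen, fmt[idx], (int)strlen(name), name);
}
```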
test_expect_success \
'merge-setup part 4' \
'echo "evil merge." >>file &&
- EDITOR=: git commit -a --amend'
+ EDITOR=: VISUAL=: git commit -a --amend'
test_expect_success \
'Two lines blamed on A, one on B, two on B1, one on B2, one on A U Thor' \
struct tree_desc tree;
setup_git_directory();
+ git_config(git_default_config);
switch (argc) {
case 3:
usage("git-unpack-file <sha1>");
setup_git_directory();
+ git_config(git_default_config);
puts(create_temp_file(sha1));
return 0;
int fd, written;
setup_git_directory();
+ git_config(git_default_config);
if (argc < 3 || argc > 4)
usage(git_update_ref_usage);