mirror of https://github.com/fosslinux/live-bootstrap.git
Remove the notion of "sys*"
- This idea originates from very early in the project and was, at the
time, a very easy way to categorise things.
- Now, it doesn't really make much sense - the split is fairly arbitrary,
  often occurring when there is a change in kernel, but not from builder-hex0
  to fiwix, and sysb is in reality completely unnecessary.
- In short, the sys* stuff is a bit of a mess that makes the project
more difficult to understand.
- This moves everything into a single folder, with a manifest file that
  is used to generate the build scripts on the fly rather than using
  hand-coded scripts (see the sketch below).
- This generation happens in the "seed" stage:
stage0-posix -- (calls) --> seed -- (generates) --> main steps
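
To make that flow concrete, here is a minimal sketch of what a manifest-driven runner could look like. The manifest format, paths, and runner logic below are hypothetical illustrations, not the project's actual generator:

#!/bin/sh
# Hypothetical runner: expand manifest lines of the form "<package> <pass>"
# into invocations of the per-pass build scripts described below.
set -e
while read -r pkg pass; do
    script="/steps/${pkg}/pass${pass}.sh"
    if [ ! -e "${script}" ]; then
        echo "missing ${script}" >&2
        exit 1
    fi
    echo "=== ${pkg} pass ${pass} ==="
    sh "${script}"
done < /steps/manifest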
Alongside this change there are a variety of other smaller fixups to the
general structure of the live-bootstrap rootfs.
- Creating a rootfs has become much simpler and is defined as code in
  go.sh. The new structure, for an about-to-be-booted system, is
  (a sketch of the assembly follows the tree):
/
-- /steps (direct copy of steps/)
-- /distfiles (direct copy of distfiles/)
-- all files from seed/*
-- all files from seed/stage0-posix/*
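
Under that layout, assembling the rootfs reduces to a handful of copies. A minimal sketch, assuming the repository checkout is the working directory and the target directory is passed as the first argument; go.sh is the authoritative version and likely does more (permissions, kernel bits, image creation):

#!/bin/sh
# Sketch of the rootfs assembly described above, not the real go.sh.
set -e
ROOTFS="${1:?usage: $0 <rootfs-dir>}"
mkdir -p "${ROOTFS}"
cp -r steps "${ROOTFS}/steps"          # direct copy of steps/
cp -r distfiles "${ROOTFS}/distfiles"  # direct copy of distfiles/
for f in seed/*; do                    # all files from seed/*
    [ "${f}" = "seed/stage0-posix" ] && continue
    cp -r "${f}" "${ROOTFS}/"
done
cp -r seed/stage0-posix/. "${ROOTFS}/" # all files from seed/stage0-posix/*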
- There is no longer such a thing as /usr/include/musl; this didn't
  really make sense, as musl is the final libc used. Rather, to
  separate musl and mes, we have /usr/include/mes, which is much easier
  to work with (see the illustrative invocation below).
- This also makes mes easier to blow away later.
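
For example, a build against the mes C library can now select the separated headers with an explicit include path. This is an illustrative invocation only, not a line taken from the tree:

# Hypothetical: use the mes headers instead of the default /usr/include
tcc -nostdinc -I/usr/include/mes -c hello.c -o hello.o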
- A few things that weren't properly packaged have been changed;
  checksum-transcriber, simple-patch, and kexec-fiwix have all been given
  fully qualified package names.
- Highly breaking change: scripts now live in their package directory,
  but NOT as packagename.sh. Rather, they use pass1.sh, pass2.sh,
  etc. This avoids manual definition of passes.
- Ditto with patches; the default directory is patches/, but any patch
  series specific to a pass is named patches-passX (see the layout
  sketch below).
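
For concreteness, pass discovery under this convention can be done purely from file names. A sketch, with an invented package name and pass count:

#!/bin/sh
# Hypothetical: derive the pass list for one package from file names,
# so no pass list has to be maintained by hand.
pkg="steps/example-1.0"            # invented package directory
for script in "${pkg}"/pass*.sh; do
    pass="${script##*/pass}"
    pass="${pass%.sh}"
    patchdir="${pkg}/patches-pass${pass}"
    [ -d "${patchdir}" ] || patchdir="${pkg}/patches"
    echo "pass ${pass}: apply ${patchdir}, then run ${script}"
done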
parent 0907cfd073
commit 6ed2e09f3a
546 changed files with 700 additions and 1299 deletions
steps/python-3.1.5/files/graminit-regen.patch (new Normal file, 31 lines)

@@ -0,0 +1,31 @@
SPDX-FileCopyrightText: 2022 fosslinux <fosslinux@aussies.space>

SPDX-License-Identifier: PSF-2.0

There is a cycle in the build process. graminit.h requires
parsetok.c to be built, but graminit.h is included in parsetok.c.
Luckily the cycle can be broken by just NOP-ing the logic from
graminit.h.

We apply this patch before regen-ing graminit.h and revert it
afterward.

--- Parser/parsetok.c 2022-10-11 14:11:29.522466304 +1100
+++ Parser/parsetok.c 2022-10-11 14:11:42.786627172 +1100
@@ -8,7 +8,6 @@
#include "parser.h"
#include "parsetok.h"
#include "errcode.h"
-#include "graminit.h"


/* Forward */
@@ -240,7 +239,7 @@
}
}
} else if (tok->encoding != NULL) {
- node* r = PyNode_New(encoding_decl);
+ node* r = NULL;
if (!r) {
err_ret->error = E_NOMEM;
n = NULL;
steps/python-3.1.5/files/py2.patch (new Normal file, 483 lines)

@@ -0,0 +1,483 @@
SPDX-FileCopyrightText: 2022 fosslinux <fosslinux@aussies.space>

SPDX-License-Identifier: PSF-2.0

We are building Python 3 using Python 2 as our bootstrap. But
makeunicodedata has been converted to Python 3. We need to
convert back, particularly print statements, and writing to
files.

We only apply this to the first build.

--- Tools/unicode/makeunicodedata.py 2012-04-10 09:25:37.000000000 +1000
+++ Tools/unicode/makeunicodedata.py 2022-07-13 14:13:37.864821008 +1000
@@ -67,7 +67,7 @@

def maketables(trace=0):

- print("--- Reading", UNICODE_DATA % "", "...")
+ print "--- Reading", UNICODE_DATA % "", "..."

version = ""
unicode = UnicodeData(UNICODE_DATA % version,
@@ -76,15 +76,15 @@
DERIVED_CORE_PROPERTIES % version,
DERIVEDNORMALIZATION_PROPS % version)

- print(len(list(filter(None, unicode.table))), "characters")
+ print len(list(filter(None, unicode.table))), "characters"

for version in old_versions:
- print("--- Reading", UNICODE_DATA % ("-"+version), "...")
+ print "--- Reading", UNICODE_DATA % ("-"+version) + "..."
old_unicode = UnicodeData(UNICODE_DATA % ("-"+version),
COMPOSITION_EXCLUSIONS % ("-"+version),
EASTASIAN_WIDTH % ("-"+version),
DERIVED_CORE_PROPERTIES % ("-"+version))
- print(len(list(filter(None, old_unicode.table))), "characters")
+ print len(list(filter(None, old_unicode.table))), "characters"
merge_old_version(version, unicode, old_unicode)

makeunicodename(unicode, trace)
@@ -103,7 +103,7 @@

FILE = "Modules/unicodedata_db.h"

- print("--- Preparing", FILE, "...")
+ print "--- Preparing", FILE, "..."

# 1) database properties

@@ -214,92 +214,90 @@
l = comp_last[l]
comp_data[f*total_last+l] = char

- print(len(table), "unique properties")
- print(len(decomp_prefix), "unique decomposition prefixes")
- print(len(decomp_data), "unique decomposition entries:", end=' ')
- print(decomp_size, "bytes")
- print(total_first, "first characters in NFC")
- print(total_last, "last characters in NFC")
- print(len(comp_pairs), "NFC pairs")
+ print len(table), "unique properties"
+ print len(decomp_prefix), "unique decomposition prefixes"
+ print len(decomp_data), "unique decomposition entries:",
+ print decomp_size, "bytes"
+ print total_first, "first characters in NFC"
+ print total_last, "last characters in NFC"
+ print len(comp_pairs), "NFC pairs"

- print("--- Writing", FILE, "...")
+ print "--- Writing", FILE, "..."

fp = open(FILE, "w")
- print("/* this file was generated by %s %s */" % (SCRIPT, VERSION), file=fp)
- print(file=fp)
- print('#define UNIDATA_VERSION "%s"' % UNIDATA_VERSION, file=fp)
- print("/* a list of unique database records */", file=fp)
- print("const _PyUnicode_DatabaseRecord _PyUnicode_Database_Records[] = {", file=fp)
+ fp.write("/* this file was generated by %s %s */\n\n" % (SCRIPT, VERSION))
+ fp.write('#define UNIDATA_VERSION "%s"\n' % UNIDATA_VERSION)
+ fp.write("/* a list of unique database records */\n")
+ fp.write("const _PyUnicode_DatabaseRecord _PyUnicode_Database_Records[] = {\n")
for item in table:
- print(" {%d, %d, %d, %d, %d, %d}," % item, file=fp)
- print("};", file=fp)
- print(file=fp)
-
- print("/* Reindexing of NFC first characters. */", file=fp)
- print("#define TOTAL_FIRST",total_first, file=fp)
- print("#define TOTAL_LAST",total_last, file=fp)
- print("struct reindex{int start;short count,index;};", file=fp)
- print("static struct reindex nfc_first[] = {", file=fp)
+ fp.write(" {%d, %d, %d, %d, %d, %d},\n" % item)
+ fp.write("};\n\n")
+
+ fp.write("/* Reindexing of NFC first characters. */\n")
+ fp.write("#define TOTAL_FIRST %d \n" % total_first)
+ fp.write("#define TOTAL_LAST %d \n" % total_last)
+ fp.write("struct reindex{int start;short count,index;};\n")
+ fp.write("static struct reindex nfc_first[] = {\n")
for start,end in comp_first_ranges:
- print(" { %d, %d, %d}," % (start,end-start,comp_first[start]), file=fp)
- print(" {0,0,0}", file=fp)
- print("};\n", file=fp)
- print("static struct reindex nfc_last[] = {", file=fp)
+ fp.write(" { %d, %d, %d},\n" % (start,end-start,comp_first[start]))
+ fp.write(" {0,0,0}\n")
+ fp.write("};\n")
+ fp.write("static struct reindex nfc_last[] = {\n")
for start,end in comp_last_ranges:
- print(" { %d, %d, %d}," % (start,end-start,comp_last[start]), file=fp)
- print(" {0,0,0}", file=fp)
- print("};\n", file=fp)
+ fp.write(" { %d, %d, %d},\n" % (start,end-start,comp_last[start]))
+ fp.write(" {0,0,0}\n")
+ fp.write("};\n")

# FIXME: <fl> the following tables could be made static, and
# the support code moved into unicodedatabase.c

- print("/* string literals */", file=fp)
- print("const char *_PyUnicode_CategoryNames[] = {", file=fp)
+ fp.write("/* string literals */")
+ fp.write("const char *_PyUnicode_CategoryNames[] = {")
for name in CATEGORY_NAMES:
- print(" \"%s\"," % name, file=fp)
- print(" NULL", file=fp)
- print("};", file=fp)
+ fp.write(" \"%s\",\n" % name)
+ fp.write(" NULL\n")
+ fp.write("};\n")

- print("const char *_PyUnicode_BidirectionalNames[] = {", file=fp)
+ fp.write("const char *_PyUnicode_BidirectionalNames[] = {\n")
for name in BIDIRECTIONAL_NAMES:
- print(" \"%s\"," % name, file=fp)
- print(" NULL", file=fp)
- print("};", file=fp)
+ fp.write(" \"%s\",\n" % name)
+ fp.write(" NULL\n")
+ fp.write("};\n")

- print("const char *_PyUnicode_EastAsianWidthNames[] = {", file=fp)
+ fp.write("const char *_PyUnicode_EastAsianWidthNames[] = {\n")
for name in EASTASIANWIDTH_NAMES:
- print(" \"%s\"," % name, file=fp)
- print(" NULL", file=fp)
- print("};", file=fp)
+ fp.write(" \"%s\",\n" % name)
+ fp.write(" NULL\n")
+ fp.write("};\n")

- print("static const char *decomp_prefix[] = {", file=fp)
+ fp.write("static const char *decomp_prefix[] = {\n")
for name in decomp_prefix:
- print(" \"%s\"," % name, file=fp)
- print(" NULL", file=fp)
- print("};", file=fp)
+ fp.write(" \"%s\",\n" % name)
+ fp.write(" NULL\n")
+ fp.write("};\n")

# split record index table
index1, index2, shift = splitbins(index, trace)

- print("/* index tables for the database records */", file=fp)
- print("#define SHIFT", shift, file=fp)
+ fp.write("/* index tables for the database records */\n")
+ fp.write("#define SHIFT %d\n" % shift)
Array("index1", index1).dump(fp, trace)
Array("index2", index2).dump(fp, trace)

# split decomposition index table
index1, index2, shift = splitbins(decomp_index, trace)

- print("/* decomposition data */", file=fp)
+ fp.write("/* decomposition data */\n")
Array("decomp_data", decomp_data).dump(fp, trace)

- print("/* index tables for the decomposition data */", file=fp)
- print("#define DECOMP_SHIFT", shift, file=fp)
+ fp.write("/* index tables for the decomposition data */\n")
+ fp.write("#define DECOMP_SHIFT %d\n" % shift)
Array("decomp_index1", index1).dump(fp, trace)
Array("decomp_index2", index2).dump(fp, trace)

index, index2, shift = splitbins(comp_data, trace)
- print("/* NFC pairs */", file=fp)
- print("#define COMP_SHIFT", shift, file=fp)
+ fp.write("/* NFC pairs */\n")
+ fp.write("#define COMP_SHIFT %d\n" % shift)
Array("comp_index", index).dump(fp, trace)
Array("comp_data", index2).dump(fp, trace)

@@ -316,30 +314,30 @@
index[i] = cache[record] = len(records)
records.append(record)
index1, index2, shift = splitbins(index, trace)
- print("static const change_record change_records_%s[] = {" % cversion, file=fp)
+ fp.write("static const change_record change_records_%s[] = {\n" % cversion)
for record in records:
- print("\t{ %s }," % ", ".join(map(str,record)), file=fp)
- print("};", file=fp)
- Array("changes_%s_index" % cversion, index1).dump(fp, trace)
- Array("changes_%s_data" % cversion, index2).dump(fp, trace)
- print("static const change_record* get_change_%s(Py_UCS4 n)" % cversion, file=fp)
- print("{", file=fp)
- print("\tint index;", file=fp)
- print("\tif (n >= 0x110000) index = 0;", file=fp)
- print("\telse {", file=fp)
- print("\t\tindex = changes_%s_index[n>>%d];" % (cversion, shift), file=fp)
- print("\t\tindex = changes_%s_data[(index<<%d)+(n & %d)];" % \
- (cversion, shift, ((1<<shift)-1)), file=fp)
- print("\t}", file=fp)
- print("\treturn change_records_%s+index;" % cversion, file=fp)
- print("}\n", file=fp)
- print("static Py_UCS4 normalization_%s(Py_UCS4 n)" % cversion, file=fp)
- print("{", file=fp)
- print("\tswitch(n) {", file=fp)
+ fp.write("\t{ %s },\n" % ", ".join(map(str,record)))
+ fp.write("};\n")
+ Array("changes_%s_index\n" % cversion, index1).dump(fp, trace)
+ Array("changes_%s_data\n" % cversion, index2).dump(fp, trace)
+ fp.write("static const change_record* get_change_%s(Py_UCS4 n)\n" % cversion)
+ fp.write("{\n")
+ fp.write("\tint index;\n")
+ fp.write("\tif (n >= 0x110000) index = 0;\n")
+ fp.write("\telse {\n")
+ fp.write("\t\tindex = changes_%s_index[n>>%d];\n" % (cversion, shift))
+ fp.write("\t\tindex = changes_%s_data[(index<<%d)+(n & %d)];\n" % \
+ (cversion, shift, ((1<<shift)-1)))
+ fp.write("\t}\n")
+ fp.write("\treturn change_records_%s+index;\n" % cversion)
+ fp.write("}\n\n")
+ fp.write("static Py_UCS4 normalization_%s(Py_UCS4 n)\n" % cversion)
+ fp.write("{\n")
+ fp.write("\tswitch(n) {\n")
for k, v in normalization:
- print("\tcase %s: return 0x%s;" % (hex(k), v), file=fp)
- print("\tdefault: return 0;", file=fp)
- print("\t}\n}\n", file=fp)
+ fp.write("\tcase %s: return 0x%s;\n" % (hex(k), v))
+ fp.write("\tdefault: return 0;\n")
+ fp.write("\t}\n}\n\n")

fp.close()

@@ -350,7 +348,7 @@

FILE = "Objects/unicodetype_db.h"

- print("--- Preparing", FILE, "...")
+ print "--- Preparing", FILE, "..."

# extract unicode types
dummy = (0, 0, 0, 0, 0, 0)
@@ -433,25 +431,25 @@
table.append(item)
index[char] = i

- print(len(table), "unique character type entries")
+ print len(table), "unique character type entries"

- print("--- Writing", FILE, "...")
+ print "--- Writing", FILE, "..."

fp = open(FILE, "w")
- print("/* this file was generated by %s %s */" % (SCRIPT, VERSION), file=fp)
- print(file=fp)
- print("/* a list of unique character type descriptors */", file=fp)
- print("const _PyUnicode_TypeRecord _PyUnicode_TypeRecords[] = {", file=fp)
+ fp.write("/* this file was generated by %s %s */\n" % (SCRIPT, VERSION))
+ fp.write("\n")
+ fp.write("/* a list of unique character type descriptors */\n")
+ fp.write("const _PyUnicode_TypeRecord _PyUnicode_TypeRecords[] = {\n")
for item in table:
- print(" {%d, %d, %d, %d, %d, %d}," % item, file=fp)
- print("};", file=fp)
- print(file=fp)
+ fp.write(" {%d, %d, %d, %d, %d, %d},\n" % item)
+ fp.write("};\n")
+ fp.write("\n")

# split decomposition index table
index1, index2, shift = splitbins(index, trace)

- print("/* type indexes */", file=fp)
- print("#define SHIFT", shift, file=fp)
+ fp.write("/* type indexes */\n")
+ fp.write("#define SHIFT %d\n" % shift)
Array("index1", index1).dump(fp, trace)
Array("index2", index2).dump(fp, trace)

@@ -464,7 +462,7 @@

FILE = "Modules/unicodename_db.h"

- print("--- Preparing", FILE, "...")
+ print "--- Preparing", FILE, "..."

# collect names
names = [None] * len(unicode.chars)
@@ -476,7 +474,7 @@
if name and name[0] != "<":
names[char] = name + chr(0)

- print(len(list(n for n in names if n is not None)), "distinct names")
+ print len(list(n for n in names if n is not None)), "distinct names"

# collect unique words from names (note that we differ between
# words inside a sentence, and words ending a sentence. the
@@ -497,7 +495,7 @@
else:
words[w] = [len(words)]

- print(n, "words in text;", b, "bytes")
+ print n, "words in text;", b, "bytes"

wordlist = list(words.items())

@@ -511,19 +509,19 @@
escapes = 0
while escapes * 256 < len(wordlist):
escapes = escapes + 1
- print(escapes, "escapes")
+ print escapes, "escapes"

short = 256 - escapes

assert short > 0

- print(short, "short indexes in lexicon")
+ print short, "short indexes in lexicon"

# statistics
n = 0
for i in range(short):
n = n + len(wordlist[i][1])
- print(n, "short indexes in phrasebook")
+ print n, "short indexes in phrasebook"

# pick the most commonly used words, and sort the rest on falling
# length (to maximize overlap)
@@ -592,29 +590,29 @@

codehash = Hash("code", data, 47)

- print("--- Writing", FILE, "...")
+ print "--- Writing", FILE, "..."

fp = open(FILE, "w")
- print("/* this file was generated by %s %s */" % (SCRIPT, VERSION), file=fp)
- print(file=fp)
- print("#define NAME_MAXLEN", 256, file=fp)
- print(file=fp)
- print("/* lexicon */", file=fp)
+ fp.write("/* this file was generated by %s %s */\n" % (SCRIPT, VERSION))
+ fp.write("\n")
+ fp.write("#define NAME_MAXLEN 256")
+ fp.write("\n")
+ fp.write("/* lexicon */\n")
Array("lexicon", lexicon).dump(fp, trace)
Array("lexicon_offset", lexicon_offset).dump(fp, trace)

# split decomposition index table
offset1, offset2, shift = splitbins(phrasebook_offset, trace)

- print("/* code->name phrasebook */", file=fp)
- print("#define phrasebook_shift", shift, file=fp)
- print("#define phrasebook_short", short, file=fp)
+ fp.write("/* code->name phrasebook */\n")
+ fp.write("#define phrasebook_shift %d\n" % shift)
+ fp.write("#define phrasebook_short %d\n" % short)

Array("phrasebook", phrasebook).dump(fp, trace)
Array("phrasebook_offset1", offset1).dump(fp, trace)
Array("phrasebook_offset2", offset2).dump(fp, trace)

- print("/* name->code dictionary */", file=fp)
+ fp.write("/* name->code dictionary */\n")
codehash.dump(fp, trace)

fp.close()
@@ -868,7 +866,7 @@
else:
raise AssertionError("ran out of polynomials")

- print(size, "slots in hash table")
+ print size, "slots in hash table"

table = [None] * size

@@ -900,7 +898,7 @@
if incr > mask:
incr = incr ^ poly

- print(n, "collisions")
+ print n, "collisions"
self.collisions = n

for i in range(len(table)):
@@ -931,8 +929,6 @@
def dump(self, file, trace=0):
# write data to file, as a C array
size = getsize(self.data)
- if trace:
- print(self.name+":", size*len(self.data), "bytes", file=sys.stderr)
file.write("static ")
if size == 1:
file.write("unsigned char")
@@ -980,12 +976,6 @@
"""

import sys
- if trace:
- def dump(t1, t2, shift, bytes):
- print("%d+%d bins at shift %d; %d bytes" % (
- len(t1), len(t2), shift, bytes), file=sys.stderr)
- print("Size of original table:", len(t)*getsize(t), \
- "bytes", file=sys.stderr)
n = len(t)-1 # last valid index
maxshift = 0 # the most we can shift n and still have something left
if n > 0:
@@ -993,7 +983,7 @@
n >>= 1
maxshift += 1
del n
- bytes = sys.maxsize # smallest total size so far
+ bytes_size = 2**31 - 1 # smallest total size so far
t = tuple(t) # so slices can be dict keys
for shift in range(maxshift + 1):
t1 = []
@@ -1010,15 +1000,10 @@
t1.append(index >> shift)
# determine memory size
b = len(t1)*getsize(t1) + len(t2)*getsize(t2)
- if trace > 1:
- dump(t1, t2, shift, b)
- if b < bytes:
+ if b < bytes_size:
best = t1, t2, shift
- bytes = b
+ bytes_size = b
t1, t2, shift = best
- if trace:
- print("Best:", end=' ', file=sys.stderr)
- dump(t1, t2, shift, bytes)
if __debug__:
# exhaustively verify that the decomposition is correct
mask = ~((~0) << shift) # i.e., low-bit mask of shift bits
--- Lib/token.py 2012-04-10 09:25:36.000000000 +1000
+++ Lib/token.py 2022-07-13 14:13:37.893821468 +1000
@@ -93,11 +93,7 @@
outFileName = "Lib/token.py"
if len(args) > 1:
outFileName = args[1]
- try:
- fp = open(inFileName)
- except IOError as err:
- sys.stdout.write("I/O error: %s\n" % str(err))
- sys.exit(1)
+ fp = open(inFileName)
lines = fp.read().split("\n")
fp.close()
prog = re.compile(
@@ -114,7 +110,7 @@
# load the output skeleton from the target:
try:
fp = open(outFileName)
- except IOError as err:
+ except IOError:
sys.stderr.write("I/O error: %s\n" % str(err))
sys.exit(2)
format = fp.read().split("\n")
@@ -131,7 +127,7 @@
format[start:end] = lines
try:
fp = open(outFileName, 'w')
- except IOError as err:
+ except IOError:
sys.stderr.write("I/O error: %s\n" % str(err))
sys.exit(4)
fp.write("\n".join(format))
steps/python-3.1.5/pass1.sh (new Executable file, 84 lines)

@@ -0,0 +1,84 @@
# SPDX-FileCopyrightText: 2022 fosslinux <fosslinux@aussies.space>
#
# SPDX-License-Identifier: GPL-3.0-or-later

src_prepare() {
default

patch -Np0 -i py2.patch

# Delete generated files
rm Include/Python-ast.h Python/Python-ast.c
rm Lib/stringprep.py
rm Lib/pydoc_data/topics.py
rm Misc/Vim/python.vim
rm -r Modules/_ctypes/libffi
mv Lib/plat-generic .
rm -r Lib/plat-*
mv plat-generic Lib/
grep generated -r . -l | grep encodings | xargs rm

# Regenerate unicode
rm Modules/unicodedata_db.h Modules/unicodename_db.h Objects/unicodetype_db.h
for f in UnicodeData CompositionExclusions EastAsianWidth DerivedCoreProperties DerivedNormalizationProps; do
mv "../${f}-3.2.0.txt" .
mv "../${f}-5.1.0.txt" "${f}.txt"
done
python Tools/unicode/makeunicodedata.py

# Regenerate sre_constants.h
rm Modules/sre_constants.h
python Lib/sre_constants.py

# Regenerate autoconf
autoreconf-2.71 -fi
}

src_configure() {
MACHDEP=linux ac_sys_system=Linux \
CFLAGS="-U__DATE__ -U__TIME__" \
LDFLAGS="-L${LIBDIR}" \
./configure \
--prefix="${PREFIX}" \
--libdir="${LIBDIR}" \
--build=i386-unknown-linux-musl \
--host=i386-unknown-linux-musl \
--with-pydebug \
--with-system-ffi \
--enable-ipv6
}

src_compile() {
# Temporarily break include cycle
patch -Np0 -i graminit-regen.patch
# Build pgen
make "${MAKEJOBS}" Parser/pgen
# Regen graminit.c and graminit.h
make "${MAKEJOBS}" Include/graminit.h

# Regenerate some Python scripts using the other regenerated files
# Must move them out to avoid using Lib/ module files which are
# incompatible with running version of Python
cp Lib/{symbol,keyword,token}.py .
python symbol.py
python keyword.py
python token.py

# Undo change
patch -Np0 -R -i graminit-regen.patch
# Now build the main program
make "${MAKEJOBS}" CFLAGS="-U__DATE__ -U__TIME__"
}

src_install() {
default
ln --symbolic --relative "${DESTDIR}${LIBDIR}/python3.1/lib-dynload" "${DESTDIR}${PREFIX}/lib/python3.1/lib-dynload"
ln --symbolic --relative "${DESTDIR}${PREFIX}/bin/python3.1" "${DESTDIR}${PREFIX}/bin/python"

# Remove non-reproducible .pyc/o files
find "${DESTDIR}" -name "*.pyc" -delete
find "${DESTDIR}" -name "*.pyo" -delete

# This file is not reproducible and I don't care to fix it
rm "${DESTDIR}/${PREFIX}/lib/python3.1/lib2to3/"{Pattern,}"Grammar3.1.5.final.0.pickle"
}
steps/python-3.1.5/pass2.sh (new Executable file, 89 lines)

@@ -0,0 +1,89 @@
# SPDX-FileCopyrightText: 2022 fosslinux <fosslinux@aussies.space>
#
# SPDX-License-Identifier: GPL-3.0-or-later

src_prepare() {
default

# Delete generated files
rm Include/Python-ast.h Python/Python-ast.c
rm Lib/stringprep.py
rm Lib/pydoc_data/topics.py
rm Misc/Vim/python.vim
rm -r Modules/_ctypes/libffi
mv Lib/plat-generic .
rm -r Lib/plat-*
mv plat-generic Lib/
grep generated -r . -l | grep encodings | xargs rm

# Regenerate encodings
mkdir Tools/unicode/in Tools/unicode/out
mv ../CP437.TXT Tools/unicode/in/
pushd Tools/unicode
python gencodec.py in/ ../../Lib/encodings/
popd

# Regenerate unicode
rm Modules/unicodedata_db.h Modules/unicodename_db.h Objects/unicodetype_db.h
for f in UnicodeData CompositionExclusions EastAsianWidth DerivedCoreProperties DerivedNormalizationProps; do
mv "../${f}-3.2.0.txt" .
mv "../${f}-5.1.0.txt" "${f}.txt"
done
python Tools/unicode/makeunicodedata.py

# Regenerate sre_constants.h
rm Modules/sre_constants.h
python2.5 Lib/sre_constants.py

# Regenerate autoconf
autoreconf-2.71 -fi
}

src_configure() {
MACHDEP=linux ac_sys_system=Linux \
CFLAGS="-U__DATE__ -U__TIME__" \
LDFLAGS="-L${LIBDIR}" \
./configure \
--prefix="${PREFIX}" \
--libdir="${LIBDIR}" \
--build=i386-unknown-linux-musl \
--host=i386-unknown-linux-musl \
--with-pydebug \
--with-system-ffi \
--enable-ipv6
}

src_compile() {
# Temporarily break include cycle
patch -Np0 -i graminit-regen.patch
# Build pgen
make "${MAKEJOBS}" Parser/pgen
# Regen graminit.c and graminit.h
make "${MAKEJOBS}" Include/graminit.h

# Regenerate some Python scripts using the other regenerated files
# Must move them out to avoid using Lib/ module files which are
# incompatible with running version of Python
cp Lib/{symbol,keyword,token}.py .
python symbol.py
python keyword.py
python token.py

# Undo change
patch -Np0 -R -i graminit-regen.patch
# Now build the main program
make "${MAKEJOBS}" CFLAGS="-U__DATE__ -U__TIME__"
}

src_install() {
default
ln --symbolic --relative "${DESTDIR}${LIBDIR}/python3.1/lib-dynload" "${DESTDIR}${PREFIX}/lib/python3.1/lib-dynload"
ln --symbolic --relative "${DESTDIR}${PREFIX}/bin/python3.1" "${DESTDIR}${PREFIX}/bin/python"

# Remove non-reproducible .pyc/o files
find "${DESTDIR}" -name "*.pyc" -delete
find "${DESTDIR}" -name "*.pyo" -delete

# This file is not reproducible and I don't care to fix it
rm "${DESTDIR}/${PREFIX}/lib/python3.1/lib2to3/"{Pattern,}"Grammar3.1.5.final.0.pickle"
}
steps/python-3.1.5/patches/install-perms.patch (new Normal file, 19 lines)

@@ -0,0 +1,19 @@
SPDX-FileCopyrightText: 2023 fosslinux <fosslinux@aussies.space>

SPDX-License-Identifier: PSF-2.0

Install libraries with 755 instead of 555 so we can strip them. (This
is what is in modern versions of python).

--- Makefile.pre.in 2023-03-15 21:49:08.274186777 +1100
+++ Makefile.pre.in 2023-03-15 21:50:02.466143662 +1100
@@ -54,8 +54,7 @@
INSTALL_DATA= @INSTALL_DATA@
# Shared libraries must be installed with executable mode on some systems;
# rather than figuring out exactly which, we always give them executable mode.
-# Also, making them read-only seems to be a good idea...
-INSTALL_SHARED= ${INSTALL} -m 555
+INSTALL_SHARED= ${INSTALL} -m 755

MAKESETUP= $(srcdir)/Modules/makesetup
steps/python-3.1.5/patches/openssl.patch (new Normal file, 33 lines)

@@ -0,0 +1,33 @@
SPDX-FileCopyrightText: 2022 fosslinux <fosslinux@aussies.space>

SPDX-License-Identifier: PSF-2.0

openssl is too new for this version of Python. Tell Python build system
we don't have openssl.

--- setup.py 2022-12-19 10:51:49.749157041 +1100
+++ setup.py 2022-12-19 10:52:37.223748681 +1100
@@ -712,7 +712,7 @@

#print('openssl_ver = 0x%08x' % openssl_ver)

- if ssl_incs is not None and ssl_libs is not None:
+ if False:
if openssl_ver >= 0x00907000:
# The _hashlib module wraps optimized implementations
# of hash functions from the OpenSSL library.
@@ -727,12 +727,12 @@
else:
missing.append('_hashlib')

- if openssl_ver < 0x00908000:
+ if True:
# OpenSSL doesn't do these until 0.9.8 so we'll bring our own hash
exts.append( Extension('_sha256', ['sha256module.c']) )
exts.append( Extension('_sha512', ['sha512module.c']) )

- if openssl_ver < 0x00907000:
+ if True:
# no openssl at all, use our own md5 and sha1
exts.append( Extension('_md5', ['md5module.c']) )
exts.append( Extension('_sha1', ['sha1module.c']) )
steps/python-3.1.5/patches/posixmodule.patch (new Normal file, 33 lines)

@@ -0,0 +1,33 @@
SPDX-FileCopyrightText: 2022 fosslinux <fosslinux@aussies.space>

SPDX-License-Identifier: PSF-2.0

musl (correctly) implements the POSIX posix_close function, however
this was added after Python 3.1.5 was released.

--- Modules/posixmodule.c 2022-10-15 10:20:33.311399832 +1100
+++ Modules/posixmodule.c 2022-10-15 10:21:03.522921510 +1100
@@ -4993,12 +4993,12 @@
}


-PyDoc_STRVAR(posix_close__doc__,
+PyDoc_STRVAR(py_posix_close__doc__,
"close(fd)\n\n\
Close a file descriptor (for low level IO).");

static PyObject *
-posix_close(PyObject *self, PyObject *args)
+py_posix_close(PyObject *self, PyObject *args)
{
int fd, res;
if (!PyArg_ParseTuple(args, "i:close", &fd))
@@ -7198,7 +7198,7 @@
{"tcsetpgrp", posix_tcsetpgrp, METH_VARARGS, posix_tcsetpgrp__doc__},
#endif /* HAVE_TCSETPGRP */
{"open", posix_open, METH_VARARGS, posix_open__doc__},
- {"close", posix_close, METH_VARARGS, posix_close__doc__},
+ {"close", py_posix_close, METH_VARARGS, py_posix_close__doc__},
{"closerange", posix_closerange, METH_VARARGS, posix_closerange__doc__},
{"device_encoding", device_encoding, METH_VARARGS, device_encoding__doc__},
{"dup", posix_dup, METH_VARARGS, posix_dup__doc__},
steps/python-3.1.5/sources (new Normal file, 12 lines)

@@ -0,0 +1,12 @@
https://www.python.org/ftp/python/3.1.5/Python-3.1.5.tar.bz2 3a72a21528f0751e89151744350dd12004131d312d47b935ce8041b070c90361
http://ftp.unicode.org/Public/3.2-Update/UnicodeData-3.2.0.txt 5e444028b6e76d96f9dc509609c5e3222bf609056f35e5fcde7e6fb8a58cd446
http://ftp.unicode.org/Public/3.2-Update/CompositionExclusions-3.2.0.txt 1d3a450d0f39902710df4972ac4a60ec31fbcb54ffd4d53cd812fc1200c732cb
http://ftp.unicode.org/Public/3.2-Update/EastAsianWidth-3.2.0.txt ce19f35ffca911bf492aab6c0d3f6af3d1932f35d2064cf2fe14e10be29534cb
http://ftp.unicode.org/Public/3.2-Update/DerivedCoreProperties-3.2.0.txt 787419dde91701018d7ad4f47432eaa55af14e3fe3fe140a11e4bbf3db18bb4c
http://ftp.unicode.org/Public/3.2-Update/DerivedNormalizationProps-3.2.0.txt bab49295e5f9064213762447224ccd83cea0cced0db5dcfc96f9c8a935ef67ee
http://ftp.unicode.org/Public/5.1.0/ucd/UnicodeData.txt 8bd83e9c4e339728ecd532c5b174de5beb9cb4bab5db14e44fcd03ccb2e2c1b5 UnicodeData-5.1.0.txt
http://ftp.unicode.org/Public/5.1.0/ucd/CompositionExclusions.txt 683b094f2bdd0ab132c0bac293a5404626dd858a53b5364b3b6b525323c5a5e4 CompositionExclusions-5.1.0.txt
http://ftp.unicode.org/Public/5.1.0/ucd/EastAsianWidth.txt a0d8abf08d08f3e61875aed6011cb70c61dd8ea61089e6ad9b6cf524d8fba0f2 EastAsianWidth-5.1.0.txt
http://ftp.unicode.org/Public/5.1.0/ucd/DerivedCoreProperties.txt 8f54c77587fee99facc2f28b94e748dfdda5da44f42adab31a65f88b63587ae0 DerivedCoreProperties-5.1.0.txt
http://ftp.unicode.org/Public/5.1.0/ucd/DerivedNormalizationProps.txt 4fc8cbfa1eed578cdda0768fb4a4ace5443f807c1f652e36a6bd768e81c2c2a3 DerivedNormalizationProps-5.1.0.txt
http://ftp.unicode.org/Public/MAPPINGS/VENDORS/MICSFT/PC/CP437.TXT 6bad4dabcdf5940227c7d81fab130dcb18a77850b5d79de28b5dc4e047b0aaac