The Road to LLL in SAGE
Damien Stehle's fpLLL code is wrapped in ticket #723 (fplll.patch). For some problems, this already gives quite good performance:
sage: B = MatrixSpace(IntegerRing(), 50, 51)(0)
sage: for i in range(50): B[i,0] = ZZ.random_element(2**10000)
....:
sage: for i in range(50): B[i,i+1] = 1
....:
sage: time C = B.LLL('fpLLL:fast')
CPU times: user 9.54 s, sys: 0.00 s, total: 9.54 s
Wall time: 9.56

sage: C.is_LLL_reduced()
True

sage: BM = B._magma_()
sage: time CM = BM.LLL()
CPU times: user 0.00 s, sys: 0.00 s, total: 0.00 s
Wall time: 15.20

sage: time magma.eval("CM := LLL(%s:Fast:=1)"%BM.name())
CPU times: user 0.00 s, sys: 0.00 s, total: 0.00 s
Wall time: 11.68
However, the main strength of MAGMA's LLL is that it automatically chooses a reasonably 'good' LLL implementation for the given input. This approach is described in Damien Stehle's paper, and timings can be found in some of his slides. On those examples SAGE currently seems to perform quite poorly.
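To illustrate the kind of dispatch MAGMA does internally, here is a minimal, purely hypothetical sketch of what such a chooser could look like on the SAGE side. The cut-offs are invented placeholders, the function name is ours, and the 'fpLLL:proved' and 'fpLLL:heuristic' algorithm strings are assumed to be exposed alongside the 'fpLLL:fast' used above; any real heuristic would have to be calibrated against the benchmarks in the next section.

def choose_lll_algorithm(B):
    # Hypothetical heuristic: pick an algorithm string for B.LLL() from the
    # dimension and the bitsize of the largest entry.
    d = B.nrows()
    bits = max(x.abs() for x in B.list()).nbits()
    if d <= 50 and bits <= 1000:
        return 'fpLLL:proved'       # small instances: the provable variant is affordable
    elif bits <= 10000:
        return 'fpLLL:fast'         # medium-size entries: fast floating-point variant
    else:
        return 'fpLLL:heuristic'    # very large entries: heuristic variant

# usage (sketch only):  C = B.LLL(choose_lll_algorithm(B))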
LLL Heuristic
To develop a simple heuristic for choosing an LLL implementation, we considered the following benchmarking examples. All of them are generated with Stehle's generate.c code and follow his slides for the choice of dimensions and bitsizes; a SAGE sketch of two of the constructions is given after the list.
- 1000-dimensional matrices filled with uniformly random integers of 10, 100, or 1000 bits respectively.
- matrices as they occur for the Knapsack problem, with (dimension, bitsize) pairs (10, 100000), (100, 10000), (150, 5000)
- matrices as they appear when solving simultaneous Diophantine equations, with (dimension, bitsize) pairs (3, 128), (12, 10000), (76, 5000)
- Ajtai-type matrices with (dimension, bits) pairs (10, 7), (2, 13), (3, 11)
- particularly bad matrices with entries of 64, 128, and 500 bits.
- NTRU-type matrices with (dimension, bits, q) triples (10, 100, 12), (100, 100, 35), (5, 1000, 75)
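As a rough illustration only (the actual benchmark inputs come from Stehle's generate.c), two of these families can be reproduced directly in SAGE. The helper names below are ours, not part of SAGE, and the knapsack construction simply generalizes the 50 x 51 example at the top of this page.

def random_uniform_matrix(d, bits):
    # d x d matrix filled with uniformly random integers of the given bitsize
    return MatrixSpace(ZZ, d, d)([ZZ.random_element(2**bits) for _ in range(d*d)])

def knapsack_matrix(d, bits):
    # knapsack-style basis: a column of random big integers next to an identity block
    B = MatrixSpace(ZZ, d, d + 1)(0)
    for i in range(d):
        B[i, 0] = ZZ.random_element(2**bits)
        B[i, i + 1] = 1
    return B

# usage (sketch only):  B = knapsack_matrix(100, 10000); C = B.LLL('fpLLL:fast')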