
Python multiprocessing is slower than regular processing. How can I improve it?

wowohweewah • 5 years ago • 1614 views

Basically, I have a script that combs through a dataset of nodes/points to remove the ones that overlap. The actual script is more complicated, but I pared it down to essentially a simple overlap check that does nothing with the result, for demonstration purposes.

I have tried a few variations of locks, queues and pools, and of adding one job at a time versus adding them in batches. Some of the worst offenders were slower by a couple of orders of magnitude. The following is the fastest I eventually got.

The overlap-checking algorithm sent to the individual processes:

def check_overlap(args):
    tolerance = args['tolerance']
    this_coords = args['this_coords']
    that_coords = args['that_coords']

    overlaps = False
    distance_x = this_coords[0] - that_coords[0]
    if distance_x <= tolerance:
        distance_x = pow(distance_x, 2)
        distance_y = this_coords[1] - that_coords[1]
        if distance_y <= tolerance:
            distance = pow(distance_x + pow(distance_y, 2), 0.5)
            if distance <= tolerance:
                overlaps = True

    return overlaps

The processing function:

def process_coords(coords, num_processors=1, tolerance=1):
    import multiprocessing as mp
    import time

    if num_processors > 1:
        pool = mp.Pool(num_processors)
        start = time.time()
        print "Start script w/ multiprocessing"

    else:
        num_processors = 0
        start = time.time()
        print "Start script w/ standard processing"

    total_overlap_count = 0

    # outer loop through nodes
    start_index = 0
    last_index = len(coords) - 1
    while start_index <= last_index:

        # nature of the original problem means we can process all pairs of a single node at once, but not multiple, so batch jobs by outer loop
        batch_jobs = []

        # inner loop against all pairs for this node
        start_index += 1
        count_overlapping = 0
        for i in range(start_index, last_index+1, 1):

            if num_processors:
                # add job
                batch_jobs.append({
                    'tolerance': tolerance,
                    'this_coords': coords[start_index],
                    'that_coords': coords[i]
                })

            else:
                # synchronous processing
                this_coords = coords[start_index]
                that_coords = coords[i]
                distance_x = this_coords[0] - that_coords[0]
                if distance_x <= tolerance:
                    distance_x = pow(distance_x, 2)
                    distance_y = this_coords[1] - that_coords[1]
                    if distance_y <= tolerance:
                        distance = pow(distance_x + pow(distance_y, 2), 0.5)
                        if distance <= tolerance:
                            count_overlapping += 1

        if num_processors:
            res = pool.map_async(check_overlap, batch_jobs)
            results = res.get()
            for r in results:
                if r:
                    count_overlapping += 1

        # stuff normally happens here to process nodes connected to this node
        total_overlap_count += count_overlapping

    print total_overlap_count
    print "  time: {0}".format(time.time() - start)

And the code to test it:

from random import random

coords = []
num_coords = 1000
spread = 100.0
half_spread = 0.5*spread
for i in range(num_coords):
    coords.append([
        random()*spread-half_spread,
        random()*spread-half_spread
    ])

process_coords(coords, 1)
process_coords(coords, 4)

Regardless, the non-multiprocessing run consistently finishes in under 0.4 seconds, while the best I can get out of multiprocessing is just under 3.0 seconds. I know the algorithm here is probably too simple to really see a benefit, but given that the case above involves almost half a million iterations and the real case involves considerably more, it strikes me as odd that multiprocessing is an order of magnitude slower.
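
(For reference, the iteration count works out as follows:)

num_coords = 1000
# every node is checked against every later node, so the inner loop runs
# 999 + 998 + ... + 1 times in total
pair_checks = num_coords * (num_coords - 1) // 2
print(pair_checks)  # 499500, i.e. almost half a million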

What am I missing / what can I do to improve this?

Source: http://www.python88.com/topic/42992
Replies [ 1 ]  |  Latest reply 5 years ago
Tim Peters
Reply   •   #1   •   6 years ago

Building O(N**2) 3-element dicts that are not used in the serial code, and transmitting them across interprocess pipes, is a pretty good way to guarantee multiprocessing can't help ;-) Nothing comes for free - everything costs.
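
To get a sense of that cost, here is a small illustrative timing of just the pickling step that each of those per-pair job dicts goes through before any real work happens (a rough sketch; the pipe transfer and the unpickling in the worker come on top of this, and the exact numbers depend on the machine):

import pickle
import timeit

# one job dict of the kind the question builds O(N**2) times
job = {'tolerance': 1, 'this_coords': [1.0, 2.0], 'that_coords': [3.0, 4.0]}

# average serialization cost per job dict
per_job = timeit.timeit(lambda: pickle.dumps(job), number=100000) / 100000
print("pickle.dumps per job: {:.2e} s".format(per_job))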

Here's a rewrite that executes much the same code regardless of whether it runs in serial or multiprocessing mode. There are no new dicts and so on. In general, the larger len(coords) is, the more benefit it gets from multiprocessing. On my box, at 20000 coordinates the multiprocessing run takes about a third of the wall-clock time.

The key is that all processes have their own copy of coords. That is done here by transmitting it just once, when the pool is created, which should work on all platforms. On Linux-y systems it could instead happen "by magic" via forked-process inheritance. Cutting the amount of data sent across processes from O(N**2) to O(N) is a huge improvement.
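
For the Linux-y "by magic" route, a minimal sketch of what forked-process inheritance looks like, assuming a fork-capable platform (count_near here is just a made-up stand-in for the real check):

import multiprocessing as mp

_coords = None  # module-level global, filled in before the pool is created

def count_near(i):
    # workers forked after _coords is set simply inherit it; only the
    # integer index i ever crosses a pipe
    x, y = _coords[i]
    return sum(1 for a, b in _coords[i + 1:] if abs(a - x) <= 1 and abs(b - y) <= 1)

if __name__ == "__main__":
    from random import random
    _coords = [[random() * 100 - 50, random() * 100 - 50] for _ in range(20000)]
    with mp.get_context("fork").Pool(4) as pool:
        print(sum(pool.imap_unordered(count_near, range(len(_coords)))))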

Getting more out of multiprocessing would require better load balancing. As written, a call to check_overlap(i) compares coords[i] against every value in coords[i+1:]. The larger i is, the less work it has to do, and for the largest values of i just the cost of transmitting i between processes - and transmitting the result back - swamps the time spent inside check_overlap(i). A couple of possible tweaks in that direction are sketched after the code below.

def init(*args):
    # pool initializer: runs once in each worker process and stashes the
    # shared data in that worker's globals, so coords is transmitted only once
    global _coords, _tolerance
    _coords, _tolerance = args

def check_overlap(start_index):
    # compare coords[start_index] against every later point and count overlaps;
    # comparing squared distances avoids taking a square root for every pair
    coords, tolerance = _coords, _tolerance
    tsq = tolerance ** 2
    overlaps = 0
    start0, start1 = coords[start_index]
    for i in range(start_index + 1, len(coords)):
        that0, that1 = coords[i]
        dx = abs(that0 - start0)
        if dx <= tolerance:
            dy = abs(that1 - start1)
            if dy <= tolerance:
                if dx**2 + dy**2 <= tsq:
                    overlaps += 1
    return overlaps

def process_coords(coords, num_processors=1, tolerance=1):
    global _coords, _tolerance
    import multiprocessing as mp
    # set the globals in this (parent) process too, so the serial path can
    # call the same check_overlap
    _coords, _tolerance = coords, tolerance
    import time

    if num_processors > 1:
        pool = mp.Pool(num_processors, initializer=init, initargs=(coords, tolerance))
        start = time.time()
        print("Start script w/ multiprocessing")
    else:
        num_processors = 0
        start = time.time()
        print("Start script w/ standard processing")

    N = len(coords)
    if num_processors:
        # each task is just an integer index; every worker already holds coords
        total_overlap_count = sum(pool.imap_unordered(check_overlap, range(N)))
    else:
        total_overlap_count = sum(check_overlap(i) for i in range(N))

    print(total_overlap_count)
    print("  time: {0}".format(time.time() - start))

if __name__ == "__main__":
    from random import random

    coords = []
    num_coords = 20000
    spread = 100.0
    half_spread = 0.5*spread
    for i in range(num_coords):
        coords.append([
            random()*spread-half_spread,
            random()*spread-half_spread
        ])

    process_coords(coords, 1)
    process_coords(coords, 4)
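
On the load-balancing point above, two small tweaks that could be tried are sketched below. Neither is part of the reply itself: chunksize is a standard Pool option, and check_overlap_pair is a hypothetical helper that would need to live at module level next to check_overlap so the workers can find it. Both are meant as drop-in variants of the pool.imap_unordered line inside process_coords.

# (a) batch the cheap calls: a chunksize ships indices to the workers in
#     groups, so one pipe round trip is amortized over many calls
total_overlap_count = sum(pool.imap_unordered(check_overlap, range(N), chunksize=64))

# (b) pair a cheap (high) index with an expensive (low) index so every task
#     does roughly the same amount of work
def check_overlap_pair(indices):
    return sum(check_overlap(i) for i in indices)

pairs = [(i, N - 1 - i) for i in range(N // 2)]
if N % 2:
    pairs.append((N // 2,))  # the middle index is left over when N is odd
total_overlap_count = sum(pool.imap_unordered(check_overlap_pair, pairs))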