c – Stack overflow when trying to implement a lock-free queue

I implemented a lock-free queue based on the algorithm specified in Maged M. Michael and Michael L. Scott's paper Simple, Fast, and Practical Non-Blocking and Blocking Concurrent Queue Algorithms (for the algorithm, skip to page 4).

I used the atomic operations on shared_ptr, such as std::atomic_load_explicit, etc.
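(For context, the pattern I'm relying on is the free-function atomic API for shared_ptr; here is a minimal sketch of it, not my actual queue code. As far as I know these overloads are deprecated in C++20 in favour of std::atomic<std::shared_ptr<T>>.)

#include <atomic>
#include <memory>

std::shared_ptr<int> g_ptr = std::make_shared<int>(0);

void sketch() {
    // atomically snapshot the current value of g_ptr
    auto local = std::atomic_load_explicit(&g_ptr, std::memory_order_acquire);
    // try to install a new value only if g_ptr still equals `local`;
    // on failure, `local` is updated to the current value
    auto desired = std::make_shared<int>(*local + 1);
    std::atomic_compare_exchange_weak(&g_ptr, &local, desired);
}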

Everything works fine when the queue is only used from one thread, but when it is used from different threads I get a stack overflow exception.

Unfortunately I haven't been able to track down the source of the problem. It seems that when a shared_ptr goes out of scope it decrements the reference count of the next ConcurrentQueueNode and triggers infinite recursion, but I can't see why...
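I can reproduce what I think is the same mechanism outside the queue. A minimal sketch (not my actual code): each node's destructor destroys its next member, which runs the next node's destructor, and so on down the whole chain, so releasing the head of a long list recurses once per node:

#include <memory>

struct Node {
    std::shared_ptr<Node> next; // destroying a Node destroys `next`,
                                // which destroys the next Node, ...
};

int main() {
    auto head = std::make_shared<Node>();
    auto cur = head;
    for (int i = 0; i < 1'000'000; ++i) {
        cur->next = std::make_shared<Node>();
        cur = cur->next;
    }
    cur.reset();
    head.reset(); // frees the whole chain recursively: stack overflow
}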

The code:

The queue node:

#include <atomic>
#include <memory>
#include <utility>

template<class T>
struct ConcurrentQueueNode {
    T m_Data;
    std::shared_ptr<ConcurrentQueueNode> m_Next;

    template<class ... Args>
    ConcurrentQueueNode(Args&& ... args) :
        m_Data(std::forward<Args>(args)...) {}

    std::shared_ptr<ConcurrentQueueNode>& getNext() {
        return m_Next;
    }

    T getValue() {
        return std::move(m_Data);
    }

};

The concurrent queue (note: not for the faint of heart):

template<class T>
class ConcurrentQueue {
    std::shared_ptr<ConcurrentQueueNode<T>> m_Head, m_Tail;

public:

ConcurrentQueue(){
    m_Head = m_Tail = std::make_shared<ConcurrentQueueNode<T>>();
}

template<class ... Args>
void push(Args&& ... args) {
    auto node = std::make_shared<ConcurrentQueueNode<T>>(std::forward<Args>(args)...);
    std::shared_ptr<ConcurrentQueueNode<T>> tail;

    for (;;) {
        tail = std::atomic_load_explicit(&m_Tail, std::memory_order_acquire);
        std::shared_ptr<ConcurrentQueueNode<T>> next = 
            std::atomic_load_explicit(&tail->getNext(),std::memory_order_acquire);

        if (tail == std::atomic_load_explicit(&m_Tail, std::memory_order_acquire)) {
            if (next.get() == nullptr) {
                auto currentNext = std::atomic_load_explicit(&m_Tail, std::memory_order_acquire)->getNext();
                auto res = std::atomic_compare_exchange_weak(&tail->getNext(), &next, node);
                if (res) {
                    break;
                }
            }
            else {
                std::atomic_compare_exchange_weak(&m_Tail, &tail, next);
            }
        }
    }

    std::atomic_compare_exchange_strong(&m_Tail, &tail, node);
}

bool tryPop(T& dest) {
    std::shared_ptr<ConcurrentQueueNode<T>> head;
    for (;;) {
        head = std::atomic_load_explicit(&m_Head, std::memory_order_acquire);
        auto tail = std::atomic_load_explicit(&m_Tail,std::memory_order_acquire);
        auto next = std::atomic_load_explicit(&head->getNext(), std::memory_order_acquire);

        if (head == std::atomic_load_explicit(&m_Head, std::memory_order_acquire)) {
            if (head.get() == tail.get()) {
                if (next.get() == nullptr) {
                    return false;
                }
                std::atomic_compare_exchange_weak(&m_Tail, &tail, next);
            }
            else {
                dest = next->getValue();
                auto res = std::atomic_compare_exchange_weak(&m_Head, &head, next);
                if (res) {
                    break;
                }
            }
        }
    }

    return true;
}
};

Example usage that reproduces the problem:

#include <thread>

int main(){
    ConcurrentQueue<int> queue;
    std::thread threads[4];

    for (auto& thread : threads) {
        thread = std::thread([&queue] {
            for (auto i = 0; i < 100'000; i++) {
                queue.push(i);
                int y;
                queue.tryPop(y);
            }
        });
    }

    for (auto& thread : threads) {
        thread.join();
    }
    return 0;
}

Best answer: The problem is that a race condition can leave every node in the queue to be released in one go; that release is recursive and blows your stack.

If you change your test to use only one thread but never pop, you get the same stack overflow every time:

for (auto i = 1; i < 100000; i++) {
  queue.push(i);
  //int y;
  //queue.tryPop(y);
}

You need to delete the chain of nodes iteratively instead of recursively:

__forceinline ~ConcurrentQueueNode() {
    if (!m_Next || m_Next.use_count() > 1)
        return;
    KillChainOfDeath();
}
void KillChainOfDeath() {
    auto pThis = this;
    std::shared_ptr<ConcurrentQueueNode> Next, Prev;
    while (1) {
        if (pThis->m_Next.use_count() > 1)
          break;
        Next.swap(pThis->m_Next); // unwire node
        Prev = nullptr; // free the previous node that we unwired in the previous loop
        if (!(pThis = Next.get())) // move to next node
            break;
        Prev.swap(Next); // else Next.swap will free before unwire.
    }
}
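As a rough sanity check (a hypothetical test I'm adding here, not part of the original question), you can build a long chain of nodes and drop the head: with the destructor above added to ConcurrentQueueNode, the chain is torn down one node per loop iteration instead of one node per stack frame.

// Hypothetical check; assumes ConcurrentQueueNode<T> from the question
// plus the iterative destructor above.
int main() {
    auto head = std::make_shared<ConcurrentQueueNode<int>>(0);
    auto cur = head;
    for (int i = 1; i < 1'000'000; ++i) {
        cur->m_Next = std::make_shared<ConcurrentQueueNode<int>>(i);
        cur = cur->m_Next;
    }
    cur.reset();
    head.reset(); // KillChainOfDeath walks the list in a loop; no deep recursion
}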

I have never used shared_ptr before, so I don't know if there is a faster way to do this, or whether your algorithm will suffer from the ABA problem. Unless there is something special in the shared_ptr implementation to prevent ABA, I worry that a previously freed node could be reused and fool the CAS. I never seemed to hit that problem when running your code.
