myklob / ideastockexchange
Empowering Rational Discourse and Decision-Making: The Idea Stock Exchange is a groundbreaking platform designed to revolutionize how we engage in political and societal debates. At its core, this project harnesses the power of collective intelligence, utilizing a structured framework for automated conflict resolution and cost-benefit analysis.

ai-machine-learning algorithmic-transparency automated-crowd-sourced-analysis collective-intelligence conflict conflict-resolution cost-benefit-analysis evidence-based-decision-making evidence-based-policy open-source political-debate-analysis rational-discource

ideastockexchange's People

Contributors: googlecodeexporter, myklob
ideastockexchange's Issues

A Workable ReasonRank

Here's a Python script in which the PageRank algorithm is modified to reflect an "ArgumentRank" approach. The modification adds the scores of supporting arguments and evidence and subtracts the scores of weakening arguments and evidence:

import numpy as np

def argumentrank(M, num_iterations: int = 100, d: float = 0.85):
    """ArgumentRank algorithm with explicit number of iterations. Returns ranking of nodes (arguments) in the adjacency matrix.

    Parameters
    ----------
    M : numpy array
        adjacency matrix where M_i,j represents the link from 'j' to 'i';
        entries may be negative (weakening links), so columns need not sum to 1
    num_iterations : int, optional
        number of iterations, by default 100
    d : float, optional
        damping factor, by default 0.85

    Returns
    -------
    numpy array
        a vector of ranks such that v_i is the i-th rank; after each iteration
        v is clipped to be non-negative and renormalized to sum to 1

    """
    N = M.shape[1]
    v = np.ones(N) / N
    M_hat = d * M + (1 - d) / N
    for i in range(num_iterations):
        v = np.dot(M_hat, v)
        # Clip negative scores and renormalize after each iteration
        v = np.maximum(v, 0)
        if v.sum() > 0:  # guard against an all-zero vector
            v /= v.sum()
    return v

# Example adjacency matrix for argument links
M = np.array([[0, -0.5, 0, 0, 1],
              [0.5, 0, -0.5, 0, 0],
              [0.5, -0.5, 0, 0, 0],
              [0, 1, 0.5, 0, -1],
              [0, 0, 0.5, 1, 0]])

v = argumentrank(M, 100, 0.85)
print(v)

This edited script represents an "ArgumentRank" algorithm where the adjacency matrix M now accounts for both strengthening and weakening arguments. The matrix entries are adjusted to add scores for supporting arguments and subtract scores for weakening arguments. Additionally, after each iteration, the algorithm ensures that the scores are normalized (non-negative and sum to 1) to maintain a consistent ranking system.

Argument Score = Sum of [sub-argument scores] × their [linkage scores] × their [uniqueness scores]

We should track each idea's score over time, similar to how we track a stock's performance.
Arguments will have scores, depending on the quantity and quality of the sub-arguments that strengthen or weaken them.
Ultimately, a conclusion's strength depends on the relative strength of its supporting arguments and evidence.
I propose the following steps to determine an argument's score, then tuning the features until we get the algorithm "right":

  • Adding the number of pro arguments
  • Subtracting the number of con arguments

Separation Distance, SD = 1

  • Adding the scores of reasons that agree
  • Subtracting the scores of reasons to disagree

Separation Distance, SD = 2

  • Adding the score of reasons to agree with reasons to agree
  • Subtracting the scores of reasons to disagree with reasons to disagree
  • Adding the scores of reasons to disagree with reasons to disagree
  • Subtracting the scores of reasons to agree with reasons to disagree.

Each sub-argument score would be multiplied by its linkage and uniqueness scores before being counted toward the conclusion score.

A multiplier could be used to reduce the strength of arguments the greater their separation distance, SD.
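As a minimal sketch of the steps above (the nested-dict data model, the `linkage`/`unique` keys, and the `decay` factor as the separation-distance multiplier are all assumptions for illustration):

```Python
def argument_score(argument, decay=0.5, depth=0, max_depth=3):
    # Hypothetical data model: each argument is a dict with optional 'pros'
    # and 'cons' lists of sub-arguments, each carrying 'linkage' and 'unique'
    # multipliers in [0, 1] (defaulting to 1). Contributions are damped by
    # separation distance via decay ** depth.
    if depth > max_depth:
        return 0.0
    score = 0.0
    for sub in argument.get('pros', []):
        score += ((1 + argument_score(sub, decay, depth + 1, max_depth))
                  * sub.get('linkage', 1.0) * sub.get('unique', 1.0)
                  * decay ** depth)
    for sub in argument.get('cons', []):
        score -= ((1 + argument_score(sub, decay, depth + 1, max_depth))
                  * sub.get('linkage', 1.0) * sub.get('unique', 1.0)
                  * decay ** depth)
    return score
```

Note how the signs compose: a con of a con lowers the magnitude of the parent con's contribution, which raises the conclusion's score, matching the SD = 2 rules above.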

A working Conclusion Score

The Conclusion Score (CS) in the context of the Idea Stock Exchange and its automated conflict resolution system is a metric designed to quantitatively assess the strength and validity of a particular conclusion or belief. This score is derived from a comprehensive evaluation process that considers various aspects of the arguments and evidence supporting or opposing the conclusion. Here's a breakdown of its components:

Reasons to Agree/Disagree (RtA/RtD): These metrics quantify the persuasive power of arguments in favor of or against a conclusion. They include the number and strength of such arguments.

Evidence Assessment (EA/ED): This involves evaluating the solidity and relevance of evidence that reinforces or detracts from an argument. It considers the quality and directness of the evidence in relation to the conclusion.

Logical Validity (LV): This aspect assesses whether an argument is logically coherent and free from fallacies. It is determined through debate outcomes and user responses, specifically focusing on identifying and accounting for any logical fallacies.

Verification (V): This indicates how well impartial, independent sources corroborate the evidence. The verification score is derived from how well each argument's evidence is verified and supported.

Linkage (L): This multiplier assesses the direct connection and impact of the argument on the conclusion based on how relevant and integral the argument is to the decision.

Uniqueness (U): This recognizes the distinctiveness of arguments, rewarding originality and reducing redundancy. It involves identifying similar statements and determining a unique argument score when presented in support of the same conclusion.

Importance (I): This measures the significance of the argument and the potential ramifications if the claim is assumed true. The weight of an argument in terms of its importance is determined through debate over its relative importance.

The Conclusion Score (CS) is calculated using the formula:

CS=∑((RtA−RtD)×(EA−ED)×LV×V×L×U×I)

Each element is critical in determining a conclusion's overall strength and credibility. The scoring system is designed to be objective and comprehensive. It incorporates many factors to evaluate conclusions based on the strength and quality of their supporting and opposing arguments and evidence.

Each item uses ReasonRank to create a score based on the performance of pro/con sub-arguments. Of course, these sub-arguments also have their own ReasonRank scores.
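The CS formula can be sketched directly in code (a minimal sketch; the per-argument record format is an assumption, with RtA/RtD and EA/ED as net persuasion and evidence terms and the rest as multipliers):

```Python
def conclusion_score(arguments):
    # Hypothetical record format: each argument supplies the seven factors
    # from the formula above. RtA/RtD measure reasons to agree/disagree,
    # EA/ED measure evidence for/against, and LV, V, L, U, I are the
    # logical-validity, verification, linkage, uniqueness, and importance
    # multipliers.
    return sum((a['RtA'] - a['RtD']) * (a['EA'] - a['ED'])
               * a['LV'] * a['V'] * a['L'] * a['U'] * a['I']
               for a in arguments)
```

For example, an argument with RtA = 3, RtD = 1, EA = 2, ED = 1, and a linkage multiplier of 0.5 (all other multipliers 1) contributes (3 − 1) × (2 − 1) × 0.5 = 1.0 to the conclusion.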

Modify this MS Access database to work on the internet.

This is an example that has sub-argument scores multiplied by their linkage scores. It is in MS Access. I outlined the pros and cons for a personal decision (baptizing my daughter in the Mormon church in light of problems with the church). Sorry I didn't pick a more universal topic.

Truth Score

Enhancing Argument Evaluation with Logical Fallacy and Evidence Verification Scores

Welcome to an innovative approach in the realm of argument evaluation: the synergistic combination of the Logical Fallacy Score and the Evidence Verification Score. This methodology offers a comprehensive evaluation of arguments by examining both their logical coherence and empirical substantiation. The Logical Fallacy Score focuses on identifying logical inconsistencies within arguments, while the Evidence Verification Score rigorously evaluates the empirical backing of these arguments, employing methods such as blind studies and comparative scenario analysis. This dual-score system not only enhances the precision and soundness of our beliefs but also heralds a new chapter in intellectual engagement and discourse. We encourage thinkers and innovators to collaborate in bringing this vision to fruition, thereby transforming the landscape of debate, learning, and cognitive growth.

In the nuanced assessment of argumentative strength, two pivotal metrics are indispensable: the Logical Fallacy Score and the Evidence Verification Score. The Logical Fallacy Score gauges the extent to which sub-arguments reveal specific logical fallacies within the primary argument. Concurrently, the Evidence Verification Score ascertains the extent of independent verification of the belief, for instance, through methods like blind or double-blind studies and the analysis of scenario similarities. These scores collectively contribute to forming more enlightened and rational beliefs.

Recognizing fallacious reasoning is pivotal for fostering a platform conducive to effective group decision-making, establishing an evidence-based political framework, and enabling humans to make judicious collective decisions.

Our proposed methodology involves anchoring the credibility of our beliefs to the robustness of the evidence. Here, evidence manifests in the form of pro/con arguments within human discourse, logically linked to data. Thus, the strength of our convictions is directly correlated to the merit or score of the pro/con evidence.

To this end, we will meticulously evaluate the relative performance of pro/con sub-arguments, assessing each based on its alignment with a specific logical fallacy. When an argument is flagged for containing a logical fallacy, either by user indication or through semantic equivalency algorithms, a specialized space will be allocated for debating the validity of this accusation. The Logical Fallacy Score then quantifies the collective strength of these debates, and our analytical algorithms will categorize these arguments, differentiating between various forms of truth, their relevance, and their connection to the larger conclusion (evidence-to-conclusion linkage).
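A minimal sketch of how those fallacy-accusation debates might be aggregated into a single score (the input format is an assumption: each alleged fallacy maps to the aggregate strength of sub-arguments that the fallacy is, and is not, present):

```Python
def logical_fallacy_score(debates):
    # 'debates' maps each alleged fallacy to a (pro, con) pair: the aggregate
    # strength of sub-arguments that the fallacy IS present vs. that it is not.
    # A positive total means the accusations, on balance, stand.
    return sum(pro - con for pro, con in debates.values())
```

For instance, if the "ad hominem" accusation scores 2.0 for and 0.5 against, while a "red herring" accusation scores 0.0 for and 1.0 against, the net score is 0.5: the argument is, on balance, judged somewhat fallacious.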

Commonly encountered fallacies often employed to support conclusions, yet fundamentally non-sequiturs, include:

  • Ad Hominem Fallacy: Attacking the person rather than the argument. Example: "You can't trust his words because he's a convicted criminal," fails to address the actual argument.
  • Appeal to Authority Fallacy: Asserting a claim is true solely based on an authority figure's endorsement, sans additional evidence. Example: "Dr. Smith's endorsement makes it true," lacks independent validation of the argument.
  • Red Herring Fallacy: Distracting from the main argument by introducing irrelevant issues. Example: "Despite my mistake, consider my past contributions to the company," shifts focus from the core issue.
  • False Cause Fallacy: Mistaking correlation for causation. Example: "Our victory after I wore my lucky socks implies they caused the win," erroneously establishes a causal link.

By diligently identifying and circumventing these fallacies, we empower individuals to partake in a more rigorous, evidence-based decision-making framework. This approach not only promises a more efficacious political environment but also cultivates a populace capable of informed and critical thinking. The Logical Fallacy Score, by spotlighting specific fallacious arguments, champions reasoned discourse and intellectual integrity.

Algorithm

  1. Compile a comprehensive list of widely recognized logical fallacies.
  2. Implement a feature allowing users to mark specific arguments as potentially containing one or more logical fallacies.
  3. Provide a platform for users to present evidence and rational discourse either supporting or contesting the assertion that the flagged argument embodies a logical fallacy.
  4. Design an automated system capable of identifying and flagging arguments that exhibit similarities to others already marked for logical fallacies.
  5. Develop a machine learning algorithm to recognize and highlight linguistic patterns and structures commonly associated with logical fallacies.
  6. Conduct a thorough evaluation of each argument flagged for containing a logical fallacy, assessing the strength and validity of sub-arguments for or against the presence of the fallacy in question.
  7. Aggregate the findings from these assessments to determine a Logical Fallacy Score, represented as a confidence interval, reflecting the likelihood of the argument containing the identified fallacy.

It's important to note that the Logical Fallacy Score is just one of many algorithms used to evaluate each argument. We will also use other algorithms to determine the strength of the evidence supporting each argument, the equivalency of similar arguments, and more. The Logical Fallacy Score is designed to identify arguments that contain logical fallacies, which can weaken their overall credibility. By assessing the score of sub-arguments that contain fallacies, we can better evaluate the strength of an argument and make more informed decisions based on the evidence presented.

Code

```Python
# List of common logical fallacies
logical_fallacies = ['ad hominem', 'appeal to authority', 'red herring', 'false cause']

# Dictionary to store arguments with their logical fallacy scores and evidence
argument_scores = {}

# Function to evaluate the score of a sub-argument for a specific logical fallacy
def evaluate_sub_argument_score(argument, fallacy, evidence):
    # Placeholder: the assessment should be based on the provided evidence
    score = 0
    # Implement logic to calculate the score
    # ...
    return score

# Function to evaluate the overall logical fallacy score for an argument
def evaluate_argument_score(argument, evidence):
    score = 0
    for fallacy in logical_fallacies:
        sub_argument_score = evaluate_sub_argument_score(argument, fallacy, evidence.get(fallacy, []))
        score += sub_argument_score
    return score

# Users flag arguments and provide evidence for potential logical fallacies
flagged_arguments = {}  # Format: {argument: evidence_dict}
# Example: flagged_arguments = {"Argument1": {"ad hominem": [evidence1, evidence2], "red herring": [evidence3]}}

# Automated system to identify similar arguments already flagged (placeholder)
similar_arguments = {}  # Format: {argument: [similar_argument1, similar_argument2, ...]}

# Placeholder for a machine learning algorithm to detect logical fallacies
class FallacyDetector:
    def detect(self, argument):
        detected_fallacies = []
        # Implement detection logic
        # ...
        return detected_fallacies

fallacy_detector = FallacyDetector()

# Evaluate logical fallacy scores for flagged arguments
for argument, evidence in flagged_arguments.items():
    argument_score = evaluate_argument_score(argument, evidence)
    # Determine confidence interval
    if argument_score < -2:
        confidence_interval = "Very likely fallacious"
    elif argument_score < 0:
        confidence_interval = "Possibly fallacious"
    elif argument_score == 0:
        confidence_interval = "No indication of fallacy"
    elif argument_score < 2:
        confidence_interval = "Possibly sound"
    else:
        confidence_interval = "Very likely sound"
    # Store results
    argument_scores[argument] = {'score': argument_score, 'confidence_interval': confidence_interval}
```


Here is code for an EnhancedFallacyDetector:

```Python
import re
import spacy

class EnhancedFallacyDetector:
    
    def __init__(self):
        # Load an NLP model from spaCy for contextual analysis
        self.nlp = spacy.load("en_core_web_sm")

        self.fallacies = {
            'ad hominem': ['ad hominem', 'personal attack', 'character assault'],
            'appeal to authority': ['appeal to authority', 'argument from authority', 'expert says'],
            'red herring': ['red herring', 'diversion', 'irrelevant'],
            'false cause': ['false cause', 'post hoc', 'correlation is not causation']
        }

        self.patterns = {fallacy: re.compile(r'\b(?:%s)\b' % '|'.join(keywords), re.IGNORECASE)
                         for fallacy, keywords in self.fallacies.items()}

    def detect_fallacy(self, text):
        results = {}
        doc = self.nlp(text)
        for sent in doc.sents:
            for fallacy, pattern in self.patterns.items():
                if pattern.search(sent.text):
                    results[fallacy] = results.get(fallacy, []) + [sent.text]
        return results
```

With this code, you can call the detect_fallacy method on any text, and it will return a dictionary mapping each detected fallacy to the sentences that triggered the detection. For example:

```Python
import re
import spacy

class ImprovedFallacyDetector:

    def __init__(self):
        # Initialize spaCy for contextual natural language processing
        self.nlp = spacy.load("en_core_web_sm")

        # Define common logical fallacies and associated keywords
        self.fallacies = {
            'ad hominem': ['ad hominem', 'personal attack', 'character assassination'],
            'appeal to authority': ['appeal to authority', 'argument from authority', 'expert opinion'],
            'red herring': ['red herring', 'diversion', 'distract', 'sidetrack'],
            'false cause': ['false cause', 'post hoc', 'correlation is not causation', 'causal fallacy']
        }

        # Compile regex patterns for each fallacy
        self.patterns = {fallacy: re.compile(r'\b(?:%s)\b' % '|'.join(keywords), re.IGNORECASE)
                         for fallacy, keywords in self.fallacies.items()}

    def detect_fallacy(self, text):
        # Process the text with spaCy for sentence-level analysis
        doc = self.nlp(text)
        results = {}
        for sent in doc.sents:
            for fallacy, pattern in self.patterns.items():
                if pattern.search(sent.text):
                    # Store the sentence text as evidence of the fallacy
                    results[fallacy] = results.get(fallacy, []) + [sent.text]
        return results

# Example usage
detector = ImprovedFallacyDetector()

texts = [
    "You can't trust anything he says because he's a convicted criminal.",
    "Dr. Smith said it, so it must be true.",
    "I know I made a mistake, but what about all the good things I've done for the company?",
    "I wore my lucky socks, and then we won the game, so my socks must have caused the win."
]

for text in texts:
    results = detector.detect_fallacy(text)
    print(results)
```


## Future Development Strategy

1. **Expansive and Varied Data Collection**: For the practical training of machine learning models in the system, acquiring a comprehensive and varied dataset is crucial. This dataset should encompass a broad spectrum of logical fallacy examples from diverse fields such as politics, business, and science, and varied media sources, including news articles, social media content, and public speeches.

2. **Incorporation of Field-Specific Insights**: Given the varying prevalence of certain logical fallacies across different domains, integrating specialized knowledge into the algorithms can enhance detection accuracy. For instance, ad hominem attacks are typically more rampant in political arenas than in scientific discourse. Tailoring the system to recognize such domain-specific patterns would significantly improve its effectiveness.

3. **Integration of Human Oversight and Feedback**: Although machine learning algorithms are adept at identifying patterns in extensive datasets, they are not infallible and may overlook subtleties or commit errors. To mitigate this, the system should embrace human intervention and feedback mechanisms. This could involve allowing users to pinpoint overlooked logical fallacies or to correct misidentified instances, thereby refining the system's accuracy.

4. **Ongoing System Enhancement**: The nature of machine learning systems is such that they consistently benefit from iterative refinement and enhancement. This process would entail the continuous aggregation of new data, the fine-tuning of algorithmic approaches, and the assimilation of user feedback. Over time, these efforts would culminate in a more precise and efficient system capable of adeptly pinpointing logical fallacies, thereby contributing to more reasoned decision-making and a better-informed public discourse.
