AI Ethics/Security/Compliance

An INCLUSIVE HUB for everyone who cares about AI Ethics, Security and Compliance. You are welcome to launch a relevant project here; we will help you find support.

The rapid acceleration in the pace of Artificial Intelligence (AI) innovation in recent years and the advent of content-generating capabilities (Generative AI or GenAI) have increased interest in AI innovation in finance, in part due to the user-friendliness and intuitive interface of GenAI tools. Currently, the use of GenAI in financial markets involving full end-to-end automation without any human intervention remains largely at the development phase, but its wider deployment could amplify risks already present in financial markets and give rise to new challenges. This paper presents recent evolutions in GenAI and its slow-paced deployment in finance, analyses the potential risks from a wider use of GenAI tools by financial market participants, and discusses associated policy implications.

more detail →

OECD AI principles → OECD Legal Instruments


How will we get along with AGI, whether in positive or tense situations? What are the most likely conflicts between AGI and humans? What would be the ultimate trigger of those conflicts?
Let’s assume that AGI does exist and is more capable than humans in intelligence, both physically and mentally. According to theory of mind, belief is the highest form of mental state, and mission is the parallel manifestation of belief in the real world. In other words, mission is the leading factor in how an AI would shape the world in the future. Before answering this question, we need to understand the gap in belief between AGI and humans. While every tech giant that has created an LLM already has an AI mission in place, we still cannot see the whole picture of those missions, let alone the understanding gap between AGI and humans, or whether AGI would adopt the human version.

For example, OpenAI’s mission is to ensure that artificial general intelligence benefits all of humanity. It is such a short sentence that we know no further details behind the two words BENEFITS and ALL. I don’t think I’m the only one with the following questions:
● How does AI interpret the simple word ALL: individuals, or humanity as a whole?
● What is AI’s view of BENEFITS, especially in multicultural contexts or among opposing stakeholders?
● Would AI be just a consultant for us, or will it be the decision maker or even the executor? And going deeper, where is the boundary between consulting and decision-making?
There are so many questions that I couldn’t list them all. But I’m sure these questions have been discussed from diverse perspectives, by diverse roles, in different ways. There are also classic experiments, in thought or otherwise; the Trolley Problem, for example, shows how challenging it is to trade off benefits between individuals and humanity as a whole. Some theories do provide a systematic, common answer to BENEFITS, such as Maslow’s Hierarchy of Needs. BUT a seemingly perfect theory often fails in complex contexts. For example, if a successful career correlates negatively with family intimacy in terms of time spent, and time is too limited to satisfy more than one option, which one should give way?

And if we add more factors, such as philosophical and religious preferences, the question becomes even more difficult to answer. In the coming days, I would like to go further into this topic.

Welcome to join the inclusive discussion by comments or email. ( , please tag “AI Discussion”)

AI Act: different rules for different risk levels

The new rules establish obligations for providers and users depending on the level of risk from artificial intelligence. While many AI systems pose minimal risk, they need to be assessed.

Unacceptable risk

Unacceptable risk AI systems are systems considered a threat to people and will be banned. They include:

  • Cognitive behavioural manipulation of people or specific vulnerable groups: for example voice-activated toys that encourage dangerous behaviour in children
  • Social scoring: classifying people based on behaviour, socio-economic status or personal characteristics
  • Biometric identification and categorisation of people
  • Real-time and remote biometric identification systems, such as facial recognition

Some exceptions may be allowed for law enforcement purposes. “Real-time” remote biometric identification systems will be allowed in a limited number of serious cases, while “post” remote biometric identification systems, where identification occurs after a significant delay, will be allowed to prosecute serious crimes and only after court approval.

High risk

AI systems that negatively affect safety or fundamental rights will be considered high risk and will be divided into two categories:

  1. AI systems that are used in products falling under the EU’s product safety legislation. This includes toys, aviation, cars, medical devices and lifts.

  2. AI systems falling into specific areas that will have to be registered in an EU database:

  • Management and operation of critical infrastructure
  • Education and vocational training
  • Employment, worker management and access to self-employment
  • Access to and enjoyment of essential private services and public services and benefits
  • Law enforcement
  • Migration, asylum and border control management
  • Assistance in legal interpretation and application of the law.

All high-risk AI systems will be assessed before being put on the market and also throughout their lifecycle. People will have the right to file complaints about AI systems to designated national authorities.
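As a rough illustration of the tiered structure above (a simplified sketch, not legal advice: the category tags and obligation strings are hypothetical shorthand, not terms from the Act):

```python
from enum import Enum

class RiskLevel(Enum):
    UNACCEPTABLE = "banned"
    HIGH = "pre-market assessment + EU database registration"
    MINIMAL = "no specific obligations"

# Hypothetical tags summarising the banned practices listed above
UNACCEPTABLE_USES = {
    "cognitive_manipulation",
    "social_scoring",
    "biometric_categorisation",
    "realtime_remote_biometric_id",
}

# Hypothetical tags for the high-risk areas requiring EU registration
HIGH_RISK_AREAS = {
    "critical_infrastructure",
    "education",
    "employment",
    "essential_services",
    "law_enforcement",
    "migration_border_control",
    "legal_interpretation",
}

def classify(use_case: str) -> RiskLevel:
    """Map a use-case tag to an AI Act risk tier (simplified sketch)."""
    if use_case in UNACCEPTABLE_USES:
        return RiskLevel.UNACCEPTABLE
    if use_case in HIGH_RISK_AREAS:
        return RiskLevel.HIGH
    return RiskLevel.MINIMAL
```

Real classification under the Act depends on detailed legal criteria and exceptions (for example, the law-enforcement carve-outs mentioned above), which a lookup table like this cannot capture.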

Transparency requirements

Generative AI, like ChatGPT, will not be classified as high-risk, but will have to comply with transparency requirements and EU copyright law:

  • Disclosing that the content was generated by AI
  • Designing the model to prevent it from generating illegal content
  • Publishing summaries of copyrighted data used for training

High-impact general-purpose AI models that might pose systemic risk, such as the more advanced AI model GPT-4, would have to undergo thorough evaluations and any serious incidents would have to be reported to the European Commission.

Content that is either generated or modified with the help of AI - images, audio or video files (for example deepfakes) - needs to be clearly labelled as AI-generated so that users are aware when they come across such content.
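The labelling requirement above could be enforced in a content pipeline along these lines (a minimal sketch; the schema, field names and label text are all hypothetical, not prescribed by the Act):

```python
from dataclasses import dataclass, field

@dataclass
class MediaItem:
    """A piece of content with provenance metadata (hypothetical schema)."""
    payload: bytes
    ai_generated: bool = False
    labels: list = field(default_factory=list)

def label_ai_content(item: MediaItem) -> MediaItem:
    """Attach a clear AI-generated label, as the transparency rules require."""
    if item.ai_generated and "AI generated" not in item.labels:
        item.labels.append("AI generated")
    return item
```

In practice such labels would likely live in standardised provenance metadata rather than an ad-hoc list, but the principle is the same: the flag must travel with the content so users see it.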

Supporting innovation

The law aims to offer start-ups and small and medium-sized enterprises opportunities to develop and train AI models before their release to the general public.

That is why it requires that national authorities provide companies with a testing environment that simulates conditions close to the real world.

Next steps

The agreed text is expected to be finally adopted in April 2024. It will be fully applicable 24 months after entry into force, but some parts will be applicable sooner:

  • The ban on AI systems posing unacceptable risks will apply six months after the entry into force
  • Codes of practice will apply nine months after entry into force
  • Rules on general-purpose AI systems that need to comply with transparency requirements will apply 12 months after the entry into force

High-risk systems will have more time to comply with the requirements as the obligations concerning them will become applicable 36 months after the entry into force.
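The staggered application dates above can be computed from the entry-into-force date. A small sketch (the entry-into-force date used here is a placeholder for illustration, not the actual date):

```python
from datetime import date

def add_months(d: date, months: int) -> date:
    """Shift a date forward by whole months (day clamped to 28 for safety)."""
    years, month_index = divmod(d.month - 1 + months, 12)
    return d.replace(year=d.year + years, month=month_index + 1, day=min(d.day, 28))

# Placeholder entry-into-force date, for illustration only
ENTRY_INTO_FORCE = date(2024, 8, 1)

# Months-after-entry-into-force milestones from the list above
MILESTONES = {
    "ban on unacceptable-risk systems": 6,
    "codes of practice": 9,
    "general-purpose AI transparency rules": 12,
    "full applicability": 24,
    "high-risk system obligations": 36,
}

def applicability_dates(entry: date) -> dict:
    """Return each milestone's applicability date for a given entry date."""
    return {name: add_months(entry, m) for name, m in MILESTONES.items()}
```

For example, with the placeholder entry date above, the ban would apply from February 2025 and high-risk obligations from August 2027.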

@Molly1 Ok-- I’ll bite/play, as it seems no one else has…

I’m not exactly sure what you are getting at-- But I think a huge amount of the consideration depends on how you define/what you select as your initial data set.

I mean, thus far, AI systems don’t ‘magically know what they are doing’ until you train them on something.

Further, since all that training data comes from people, written, drawn, whatever-- Why are you surprised, at present, it doesn’t just amplify many of the biases people already harbor ?

And then, even if we try to force ‘align’ it, you still have major issues like Gemini’s recent fiasco with producing images of black Nazis.

I mean, I study this because I think it is really interesting; it is a new tool. And especially for those of us who are actually studying it… We know we are not ‘creating God’ here, and I, at least, can give some reflection to your concerns--

However, I think it is more the news outlets that have to sell papers, or even some of the AI companies themselves that have to win market share, that are kind of ‘overhyping’ this whole thing.

Or to finalize in my own way-- I really don’t worry about the people studying/developing/learning AI to use it in the wrong way-- I think all of us have these concerns in the back of our minds. Rather I worry about some uneducated ‘manager’ forcing its outputs on others as an ‘application’.

In response to your insightful input, I appreciate your genuine feedback. Similarly, my interest in this field stems from its intriguing nature and the quest for knowledge. Indeed, the question of whether “AI truly knows what it’s doing” is context-dependent. An instance where an AI could be said to “know what it’s doing” might involve its ability to respond contextually, drawing upon its understanding of the discourse.

When defining AI, we often refer to it as a product or service API that, having undergone rigorous data training, is designed to simulate human-like decision-making or problem-solving capabilities. Therefore, while AI doesn’t inherently possess magical insight, it learns patterns and makes decisions based on the data it’s trained on, which inevitably reflects and potentially amplifies existing biases within society. This underscores the critical importance of diverse, unbiased datasets and ethical considerations during the design and implementation stages of AI technologies.

Moreover, acknowledging the media’s role in sometimes sensationalizing AI-related issues is pertinent. However, it’s equally important for researchers and practitioners alike to continuously refine AI models, ensuring they align with ethical principles and mitigate potential biases effectively.