I remember when fuzzing was just sending long strings to RPC programs, and
tapping the cloaca of all Unix programs, the signal handler, to see
what came out. But now, to be a hacker, you have to be a scientist.
Computer science is a real thing. But most computer scientists I know can't
explain how to do it because it comes out sounding like a deep dive into a
dungeons and dragons campaign run by toddlers. And perhaps, the hardest
thing with computer science is knowing when you're stuck, when the noise
inherent in your system has overwhelmed the signal, and hence, you.
Really I lied. The hardest thing is knowing when someone else on your team
is stuck and being able to reach into their understanding of the System and
unstick them. Because science, like digestion, is a team sport.
You can, if you want, undertake fun experiments. For example, you could as
a hacker just say publicly which 0days you know are sitting around, waiting
to be found. You can be as loud and annoying about it as possible, then
just wait a few years and see if there are any cool BlackHat talks on the
subject
<https://www.blackhat.com/us-20/briefings/schedule/#room-for-escape-scribbli…>
or not, and if the market makes any particular changes to how it deals with
that technology. There will not be any. This might make you ask more
questions - more uncomfortable ones.
"What is an acceptable parasitic load in a system?" you might ask in this
way. In the animal kingdom, it is an astonishing 40%
<https://www.nationalgeographic.com/animals/article/animals-evolution-parasi….>.
In computers, it is probably the same, where the science of hacking is
equally ignored and reviled, both profitable and prophetless.
Most hackers you know are specialized in the mystic art of Transformation.
A heap overrun becomes an information leak which becomes code execution. A
denial of service becomes a side channel attack becomes a local privilege
escalation. Sometimes it's hard to see the science in this. A friend of
yours will look down upon it as "just engineering". But it's not enough to
just find one bug anymore, or even one transformation of a single bug.
Every bug must pass through multiple slits at the same time now, like a
lost waveform. This takes some science.
My point is this: If you think you are defending against Engineers, but
really you are defending against Scientists, you've already lost. And if a
country wants to build and maintain offensive power in cyberspace, it has
to understand how to care for and nurture the places that treat it as a
science.
-dave
The most annoying thing with talking to computer scientists about anything
is they will look at any problem that remotely touches software and ask you
"Is that the right data structure? Are you ... sure?"
Like, this is what happens to every programming language - it's why you get
NaN or an empty list for any given arbitrary code fragment in Javascript.
People had a normal data structure, say a dictionary, and were like "What
if we OPTIMIZED IT for all the common situations?" And so now a Dictionary
is like a hybrid "Dictionary-List-Cache-Semi-Ordered-ViewMap" and it
changes everything about how it operates according to some internal
heuristic only some ancient and primal god of mischief could understand.
So when someone asks me why, in certain cases, my program returns weird
results right now, the REAL answer is, "Some computer scientist took what
could have been a perfectly good data structure, and gave it performance
anxiety". But project managers hate that answer. So instead they get to
hear about graph databases.
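To be fair to the computer scientists, the performance anxiety is observable from user space. Here's a quick Python sketch of one such hybrid in the wild: CPython's dict, which since 3.7 is contractually both a hash table and an ordered list.

```python
# CPython's dict stopped being "just" a hash table a while ago: since
# Python 3.7 it also guarantees insertion order, i.e. it moonlights as
# a list.
d = {}
d["zebra"] = 1
d["aardvark"] = 2
d["mongoose"] = 3

# Iteration order is insertion order, not hash order or sorted order.
assert list(d) == ["zebra", "aardvark", "mongoose"]

# Deleting and re-inserting a key silently moves it to the end -- the
# kind of internal heuristic you only discover via "weird results".
del d["zebra"]
d["zebra"] = 1
assert list(d) == ["aardvark", "mongoose", "zebra"]
```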
This brings me to two important and closely linked subjects: SBOM, and
venomous jellyfish.
As the mighty Halvar Flake once probably said to himself, "I can take any
hideously boring problem, and turn it into a fascinating and only a bit
unsolvable graph algorithm solution!" And this is where SBOMs currently
live.
Software is amazing, and people in cyber policy like to think of it as if
it was a book or long journal article, and you can take a snapshot of it,
and send it to your friend Bob with a version number 1.0 on it and Track
Changes and then they send it back with a version number 1.1 or
1.0-BobEdits and that's that.
But that is only what Loki, the god of lies, wants you to think.
Environment is a huge part of the equation! You can go to your local pond,
and get a carnivorous tadpole, an angry little hungry frog baby with a
giant beak that eats other frog babies, and show it to a biologist and ask
them the species, and they will tell you Spadefoot, and then find one
eating plants in the corner, just a cute little guy, and the biologist will
also tell you Spadefoot, and when you look at them confused they will shrug
and mumble something about phenotypic plasticity which is clearly a bunch
of words they made up to sound cool.
It is so with software. What software are we running? Well, the description
of software is rarely smaller than the software itself. It is usually much
bigger.
An SBOM could be described as a nested manifest of metadata about software.
But if you say that to a computer scientist you found drowsing on the beach
they will perk up like an evil sea otter who has spotted a bivalve and say
"Wait, are you sure about that data structure? Is it truly a directed
acyclic TREE structure, or is it more a ..." an awkwardly long pause will
ensue as they struggle to control their emotions "...graph?" At this point
you will realize you've made a mistake.
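The scientist is, annoyingly, right: the moment two packages share a dependency, your manifest stops being a tree. A minimal Python sketch (package names invented) that counts parents, which is all it takes to catch a tree-only tool lying:

```python
from collections import Counter

# A "tree-only" SBOM tool assumes every package has exactly one parent.
# Real dependency data disagrees: shared libraries give you a DAG at
# best. Hypothetical package names below.
deps = {
    "app":           ["web-framework", "json-lib"],
    "web-framework": ["json-lib", "logging"],  # json-lib has TWO parents
    "json-lib":      [],
    "logging":       [],
}

def parent_count(graph):
    """Count how many distinct parents each node has."""
    c = Counter()
    for children in graph.values():
        c.update(children)
    return c

# Any node with more than one parent means your Trees are actually Graphs.
multi = sorted(n for n, k in parent_count(deps).items() if k > 1)
print(multi)  # → ['json-lib']
```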
If you somehow manage to escape by diverging the topic into something about
eagles and Mordor, you can go on your merry way, building and selling tools
that work on Trees and only Trees. You will be arboreal, but rich. And then
someday someone will deliver a copy of Nature to your yacht by mistake,
which by this point will be the only way to traverse most of the East
coast, and you'll read about Jellyfish, or as the biologists will haughtily
inform you are now called simply, "Jellies". (The less money a scientist
makes, the more haughty and good looking.)
But because you are a "learned person" you will read this article
<https://www.nature.com/articles/news.2008.1134> about jellyfish and they
will let you know about horizontal gene transference, which breaks every
idea you had about how evolution worked. But it also might remind you about
backporting and cherry-picking and a lot of crazy stuff that happens in the
software world. So you might boat over to where that computer scientist
was, and ask them maybe if they can port all your Tree-working code to
Graph-working code.
And then, unfortunately for you, the story gets dark.
-dave
For many years GraphQL implementations have had massive issues with
access control/authorization and denial of service. This is a common
problem when you essentially give the client a database prompt.
GraphQL is better off on the back end only, IMO.
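To make the DoS side concrete, here's a minimal Python sketch of query-depth limiting, one of the standard mitigations, using a toy nested-dict stand-in for a parsed query rather than a real GraphQL AST:

```python
def query_depth(node, depth=1):
    """Depth of a query represented as nested dicts -- a toy stand-in
    for a real GraphQL AST, shaped {field: subselection_or_None}."""
    children = [v for v in node.values() if isinstance(v, dict)]
    if not children:
        return depth
    return max(query_depth(c, depth + 1) for c in children)

MAX_DEPTH = 5

# An attacker-style query: friends-of-friends-of-friends, forever.
evil = {"user": {"friends": {"friends": {"friends": {"friends":
        {"friends": {"name": None}}}}}}}

# Reject before execution, not after the database has melted.
if query_depth(evil) > MAX_DEPTH:
    print("rejected")  # → rejected
```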
At OWASP we have an older cheatsheet on this topic that gets a lot of hits.
https://cheatsheetseries.owasp.org/cheatsheets/GraphQL_Cheat_Sheet.html
If you have suggestions to make this better let me know!
Regards,
jim(a)manicode.com
>> On Mar 5, 2022, at 10:14 AM, Dave Aitel via Dailydave
>> <dailydave(a)lists.aitelfoundation.org> wrote:
>> One of the best ways to get more performance out of your networked
>> system is to trust the client more. This is always a bad idea from a
>> security perspective, as everyone on this list knows, but it's fun to
>> see it reincarnated a thousand times in different bodies.
>>
>> So for example if your web application has endless structured data
>> always changing and you're sick of writing REST APIs and middleware
>> you start thinking - what if instead, I had a flexible Javascript API
>> in the client that just grabbed data right from the database, and
>> that database did the user-authorization?
>>
>> Anyways, GraphQL is interesting and makes the hacker in me, and
>> probably all of you, hungry. In general, from a security perspective,
>> "Let the user talk directly to the database, but we FILTER it a bit"
>> is always a hilarious losing perspective, like trying to outswim a
>> shark, or having just one more drink of Goldschlager.
>>
>> Any filter or translation layer starts introducing protocol
>> desynchronization vulnerabilities, of course, but also you have to
>> worry about timing oracles, denial of service via resource
>> exhaustion, authorization mistakes, and a whole host of nightmares
>> that hackers at this point can pull out of endless old Bugtraq posts
>> whenever they feel like they need a conference talk at your expense.
>>
>> People often make the mistake of correlating "No plugin exists in
>> BURP for this attack surface" with "this new technology is more
>> secure than the last one!"
>>
>> What confuses me is when people deploy huge web applications based on
>> this sort of thing you would think they would at least ask the giant,
>> VC funded companies, "Which security team looked at and gave you a
>> review of this tech? What if the whole thing is a bad idea?" Like in
>> five years are we going to realize that you can't give users the
>> ability to run arbitrary regular expressions on your extremely
>> complicated database without so many checks and balances that it
>> ruins the whole point of having them connect to the database in the
>> first place? Yes, yes we are.
>>
>> On one hand, this is a sad state of affairs. On the other hand, who
>> are we without it? This failure is the upwelling current that brings
>> nutrients from the ocean floor to our arctic habitat. This is the
>> solar wind of quantum bits we float from planet to planet on. This is
>> the brief touch of a child's hand on the belly of the Buddha. This is
>> truth in the way that we know it.
>>
>> -dave
>> ----
>> Resources:
>> https://blog.forcesunseen.com/a-primer-for-testing-the-security-of-graphql-…
>> https://medium.com/csg-govtech/closing-the-loop-practical-attacks-and-defen…
>> _______________________________________________
>> Dailydave mailing list -- dailydave(a)lists.aitelfoundation.org
>> To unsubscribe send an email to dailydave-leave(a)lists.aitelfoundation.org
If cities were 100% accurately represented by video games, Miami would of
course be *GTA: Vice City*, a story of simplistic corruption garishly lit
and stuck in 2002 forever. It's traditional to hate on Miami, right until
you make some crypto money and decide to move there into a condominium with
a stunning view and an equally stunning lack of maintenance or foresight
around rising water tables.
Seattle, on the other hand, is *Cyberpunk 2077*, a city run by
cybernetically enhanced corpos who get to work by walking past endless
discarded refuse and homeless tent cities heated with literal barrel
campfires - on my way to the airport yesterday we drove through some thick
fog, which the Uber driver explained to me was just "a fire under the
bridge" with the same level of casual interest he would apply to a sale at
a JCPenney.
Traveling between these two cities imposes arbitrage costs on your
consciousness itself, extracting profit from your inability to look away
from what seems like an obvious oncoming disaster. What happens when the
ocean goes up another foot, and nobody can get flood insurance? you ask
yourself, as people around you wave you off. How come such a progressive
city can't serve its people's needs, or at the very least pick up their
trash? you wonder, while running past a well worn armchair next to the
freeway that, rain or shine, serves as someone's impromptu throne.
While the humanity in you rages against the system, the hacker in you
realizes that knowing the past and processing it to produce the future can
be as useless and predictable as an earthworm's digestion. Hackers live in
a realm between spaces and times, looking at the hidden connections and
occasionally playing a chord on the threads.
-dave
https://twitter.com/SecurePeacock/status/1486156096259637250?s=20
So I wanted to respond to this post which starts "If someone exploits an
0day they still have to setup C2 - this is where TTPs are generated that
Blue Teams can win against". And I think for the past year I've gone on a
huge journey of discovery, annoying my Cyber Threat Intelligence friends to
no end as I ask annoying questions like "After you put some random
non-googlable name up, like PLATINUM, can you just add a little flag so I
know what country you're talking about?"
(Argh. The whole point of codenames is they are UNIQUE and easy to search
for. This is like naming your OS "Windows" I guess.)
Anyways, imagine if seventeenth-century biologists were reporting to the
newly established Royal Society and they were talking about counting all
the animals and doing studies on animals, and of course what they used to
do that with were the various animals that kings and whatnot had gotten
stuffed and sent to them. I feel like you would find out that almost all
animals had
fur and were easily shot by muskets or stabbed with spears! I guess my
point here being: Cyber Threat Intelligence is in a very hard place right
now, despite soaring revenues and many exciting trophies on the wall.
What you hear, over and over again, is that yes, detecting exploitation is
hard, but you will be able to detect "lateral movement" and see the command
and control traffic, and when attackers need to "accomplish their mission"
they will therefore be detectable. And this is true - for some missions,
and for some operational concepts
<https://cybersecpolitics.blogspot.com/2020/05/asynchronous-command-and-cont…>
that accomplish those missions. But we fail when we don't consider other
operational concepts and other missions. Apparently we call the many
reasons we fail to turn data into warnings and then into action:
"pathologies".
Good marketing from XDR companies is a pathology in this space. And that
pathology goes to the highest levels - when we have leaders in govt say "We
don't see any serious Log4J exploitation" we have to think "Wait, we have
almost no visibility for Unix targets though". Even when we have the right
telemetry, we don't have the right analysis.
I like to probe our pathologies with annoying questions:
- What percentage of worms do we see?
- What happens when people don't use a C2 but just drop an implant?
- Who are the hacker groups focusing only on Unix?
- What percentage of 0day do we really even find?
- Are we looking only at our adversary's actions, or also our own to
make trendlines?
But more than that, we are not self-conscious in the way that we should be
about our own analytical pathologies. This is because our academic
structure for peer review and everything else in this space is pretty
busted. Anyways, there's more to the world out there than just lions and
antelope and espionage RATs. To see the really interesting things you need
a microscope, and the kind of eyes that want to squint through the lenses
of microscopes we haven't even built yet.
-dave
______ _ _ ____ ___ _ _
/ / _ \ ___ ___ | |_ ___ __| |/ ___/ _ \| \ | |
/ /| |_) / _ \ / _ \| __/ _ \/ _` | | | | | | \| |
/ / | _ < (_) | (_) | || __/ (_| | |__| |_| | |\ |
/_/ |_| \_\___/ \___/ \__\___|\__,_|\____\___/|_| \_|
*** /RootedCON'2022 - Main activity ***
-=] About RootedCON
RootedCON is a technology congress that will be held in Madrid
(Spain) from 10-12 March 2022.
With estimated seating for 2,500 to 3,000 people, it is the most
relevant specialized congress held in the country, and one of
the most relevant in Europe, with attendee profiles ranging from
students and Law Enforcement Agencies to professionals in the technology
and information security market and even just passionate people.
This is our XII edition, after the restrictions pause. And as in every
edition, we want to make it special :)
-=] Talk types
We will mostly accept two kinds of talks:
- Fast talks: 20 minutes.
- Standard talks: 50 minutes.
There will be a limited number of talks of both types, with the
possibility of working with the schedule to extend a talk beyond the
20-minute limit, or to reduce a 50-minute one.
We encourage you to BE ORIGINAL with your proposals. We accept *rare*
talks and themes, on culture or politics (always orbiting around
technology or Information Security concepts).
-=] International speakers
There is simultaneous Spanish-English and English-Spanish translation on
all tracks, so please do not hesitate to submit a talk, wherever you
are from :)
Be sure to indicate the language in which you will give it:
[ES] - Spanish
[EN] - English
-=] Topics we are looking for
Any interesting topic related to TECHNOLOGY, having examples below and
not limited to:
- ANY original topic that contributes content to our audience!
- APT, botnets and malware.
- Obviously, ransomware!
- BIO Hacking and alternative disciplines
- Any hacking topic in any environment: IP, OT, IoT, Cloud, EDGE,
Satellites, Mobiles...
- Reverse engineering, debugging, hooking, fuzzing, exploiting, DFIR,...
- Financial Tech (FinTech)
- Hardware Hacking, Jtag, SWJ, Dap, consoles,...
- Videogames, cheats...
- Cryptography, steganography, covert channels,...
- DEV/SEC/OPS.
- DEV: MQTT, AMQP, development patterns, distributed development,
CI/CD...
- OPS: puppet, jenkins, orchestration, virtualization and containers,
artifacts,...
- Culture, philosophy and ethics, future, innovation ... the world!
-=] Talk submission procedure
We will only accept talks submitted through the official speaker form:
https://cfp.rootedcon.com/ (both English and Spanish)
Any other talk submission will be considered "unofficial" and will not
have any guarantee of being selected.
-=] Speaker benefits and privileges
Every speaker will get these benefits and privileges:
- ONE extra ticket for a partner to attend the event.
- Dinner with all the speakers, RootedLABS trainees, sponsors and the
RootedCON team.
- Accommodation (RootedCON covers the costs, including the partner's).
- Travel (RootedCON covers the speaker's costs).
- Full access to all congress areas for the whole event.
- The possibility of repeating the talk up to three times, once in
every track (depending on the final rating).
- Some free drinks at the party :)
- Potential job offer management.
- A gift from the organization.
-=] Obligations and duties of the speaker
All speakers whose submitted talk is selected must:
a) Confirm that the talk is TECHNICAL and supported with Proofs of
Concept (PoC). If PoCs are not available, this should be justified.
Lately all I've been doing is data science but I've been trying to keep up
with some of the cool work happening in the cybers as well. One project I
think is especially cool is the Joern Ghidra2CPG project.
https://twitter.com/fabsx00/status/1466302205019971586?s=20
https://joern.io/blog/joern-supports-binary/
The theory is that you can use the Ghidra decompiler, then have a code
property graph, which they store in a special purpose in-memory graph
database (that should probably be ported to A REAL GRAPH DATABASE). Then
you can make queries in scala (ugh) against that DB to find bugs.
One example is here:
https://github.com/joernio/query-database/blob/main/src/main/scala/io/joern…
Has anyone else tried using this?
It'd be cool instead of doing source sink to do clustering and missing link
analysis and MLlib against the graph database. Also a real graph db might
be able to scale better... but regardless, this is the kind of cool project
I hoped to see when Ghidra first came out!
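For anyone who hasn't played with it, the source-sink style of query reduces to a reachability question over the property graph. A toy Python sketch (node names invented, and nothing like Joern's actual Scala API):

```python
from collections import deque

# Toy data-flow edges from a code property graph: node -> nodes that
# tainted data flows into. Hypothetical node names; a real CPG layers
# AST, CFG, and data-dependence edges.
flow = {
    "recv_buf":   ["strcpy_arg"],
    "strcpy_arg": ["stack_buf"],
    "getenv_ret": ["log_msg"],
}

def reaches(graph, source, sink):
    """BFS: does tainted data from `source` ever reach `sink`?"""
    seen, work = set(), deque([source])
    while work:
        node = work.popleft()
        if node == sink:
            return True
        if node in seen:
            continue
        seen.add(node)
        work.extend(graph.get(node, ()))
    return False

print(reaches(flow, "recv_buf", "stack_buf"))    # → True  (a bug)
print(reaches(flow, "getenv_ret", "stack_buf"))  # → False
```

The clustering and missing-link analysis mentioned above would operate on the same graph, just with fancier queries than plain BFS.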
This paper as well seems quite relevant:
https://twitter.com/0xadr1an/status/1466518964029169672?s=20
The Convergence of Source Code and Binary Vulnerability Discovery – A Case
Study
https://www.s3.eurecom.fr/docs/asiaccs22_mantovani.pdf
-dave
So I definitely have a different mental history of active directory than
most people, and recently I was doing a Glasshouse podcast with Pablo Breuer
<https://www.linkedin.com/in/pablobreuer/> and here
<https://youtu.be/Z0d6qNLevUY?t=2714> he says basically the same thing
everyone says, which is that it's impossible to move off of technology even
when that technology has a history of severe flaws, or a design flaw that
means it cannot be secured.
This is the current mental stance among CIOs familiar with large companies,
or even medium size companies! And I get it! But if leopards keep eating
your face, and every hacker in the world keeps recommending you stop giving
them a cuddle, and you say "I can't, I have legacy systems in my head that
love to hug large dangerous cats" then that stops being the government's
problem, in a way. Like when people ask why Cyber Insurance Markets are
obvious catastrophic failures, and we point at how they can't really change
any meaningful behavior, and they have to insure the total market value of
whatever company they are insuring because the cost of risk is basically a
sliding scale of whatever the Russian ransomware team thought up that
morning over kasha, then everyone gets that surprised face and it's all
very annoying.
So anyways, that brings us back to AD. AD is a system where any time you
hack any computer on the network, you can become the domain controller, and
own the whole company. That's just how it works. Every hacker/penetration
tester has known that for two decades and the specific incantation on how
you do that changes slowly over time, but it's always true. And then at
INFILTRATE one year two Microsoft Research team members demonstrated an
automation of the lateral movement piece which is now what Bloodhound
<https://mcpmag.com/articles/2019/11/13/bloodhound-active-directory-domain-a….> is.
So in theory everyone knows this right now, even though they like to blame
EternalBlue for all their problems in life.
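At its heart, that automation is just shortest-path over a trust graph. A toy Python sketch with invented AD edges, in the spirit of what BloodHound computes (not its actual data model):

```python
from collections import deque

# Invented AD relationships: source -[edge label]-> target.
edges = {
    "alice":        [("MemberOf", "helpdesk")],
    "helpdesk":     [("AdminTo", "workstation1")],
    "workstation1": [("HasSession", "domain_admin")],
    "domain_admin": [("AdminTo", "domain_controller")],
}

def attack_path(start, goal):
    """BFS for the shortest chain of trust edges from a foothold
    to the crown jewels."""
    work = deque([(start, [start])])
    seen = {start}
    while work:
        node, path = work.popleft()
        if node == goal:
            return path
        for label, nxt in edges.get(node, ()):
            if nxt not in seen:
                seen.add(nxt)
                work.append((nxt, path + [f"-[{label}]->", nxt]))
    return None

print(attack_path("alice", "domain_controller"))
```

Hack any one box on the left side of that graph and some chain of edges like this usually exists to the right side, which is the whole point.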
But when you point that out on Twitter
<https://twitter.com/dinodaizovi/status/1418909301746327559?s=20>, people
ask you what the alternative is, and I have to admit I disagree with DDZ
that it's "Zero Trust". That sounds like adding more complexity to a system
that is already SO COMPLEX even lifetime specialists not named James
Forshaw don't understand the BASICS of the authentication system.
Like here's a paper
<https://twitter.com/DebugPrivilege/status/1418884269376671755?s=20> that
came out today that's in my queue all about Service credentials, and look -
no matter how many new auditing tools or visualization thingies or AI
anomaly detection alerts you deliver to your customers, if the underlying
system is NOT UNDERSTANDABLE BY HUMANS then you can't secure it. I
guarantee you that about 80% of the Russian ransomware affiliates
understand Service Credentials and delegation better than your current AD
management lead. Most of the time your AD ACLs are just you fooling
yourself that you have a security boundary where you, in fact, don't.
Also, the problem is not NTLM. Everyone stop talking about NTLM. It
wouldn't matter if AD was re-implemented to use purely quantum key exchange
because only Gandalf can mentally visualize the transitive trust structures
implicit in how you configured your AD Forests.
Ok so that brings us back to: What do you do instead? And honestly, I don't
know. I've enjoyed reading the snippets that Grapl Security
<https://www.graplsecurity.com/> has been posting about their setup. As far
as I can gather, the TL;DR is just use Google as your directory server and
use Chromebooks as much as possible.
This is what I do right now - but I'm not sure how scalable this is. Maybe
y'all can pitch in on this thread and suggest a solution?
Thanks,
Dave Aitel
Ok y'all - you're letting me down. There's a thousand ways you and your
friends can use 10k to improve the world - engineering a solution nobody
would pay for because it's not something you can put at a booth at RSAC.
EVERYONE ON THIS LIST needs to either submit for a grant, or find someone
who will submit for a grant. You're telling me not one of those
superhackers at Microsoft and Google can find a worthy project? It's
Thursday, and there are 5000 people on this list, each of whom can destroy
whole systems of the world with their minds, but actually all I want now is
for them to work up the energy to fill out this google form
<https://nostarchfoundation.org/apply-now/>. I have a whole team of very
cool people <https://nostarchfoundation.org/our-board/> waiting to help
walk you through the process once you do.
And the grant recipients that get selected are also going to get mentored
by experienced members of the field - the head of the mentorship committee
<https://en.wikipedia.org/wiki/Fred_Davis_(entrepreneur)> started a little
zine back in the day called Wired and knows basically EVERYONE, and I think
the mentorship alone should convince you to submit a grant request.
Anyways, typey typey. Get to it. :)
-dave
P.S.
Here's one of last year's submissions
<https://nostarchfoundation.org/grant-recipients-by-year/2020-grant-recipien…>,
which I quite like.