Call For Papers 2023
Tired of your bosses suspecting that conference trips to exotic locations are just a ploy to partake in Security Vacation Club? Prove them wrong by coming to Helsinki, Finland on May 4-5 2023! Guaranteed lack of sunburn, good potential for rain or slush. In case of great spring weather, though, no money back.
CFP and registration both open. Read further if still unsure.
Maui, Miami, Las Vegas, Tel Aviv or Wellington feel so much sunnier once you've experienced the lack of infinity pools in Northern Europe. Instead of pools and palm trees, we can offer you actual saunas and a high-tech environment, which is a weird combination of demoscene, widespread Linux adoption, mobile Internet with uncapped flat rate data and a long history of IRC and imageboards.
What defines a conference? For t2 it has always been that intimate, welcoming atmosphere of a small event, which makes both audience and speakers approachable. There are enough regulars to create the feeling of a community, but not so many that a first-timer would feel left out. On the content side, we have always been and always will be a technical security conference, emphasizing cutting-edge, world-class research. This is an event for the community. Our focus[1] is on technical excellence, not politics or player hating.
t2'23 offers you an audience with a taste for technical security presentations containing original content. This is your chance to showcase the latest research and lessons in EDR simulation and healthcheck spoofing, hardware insecurity, inferring information from interference, cloud-scale forensics or persistence automation, new vulnerability classes, AI exploitation, virtual machines inside parsers, elegant exploitation of old vulnerability classes, modern defense, dropping zero days during presentations, state of the art memory corruption mitigation bypasses, evasions, safe cracking, satellite and space security, remote vehicle access, or whatever research lights up the eyes of seasoned conference visitors. For the hackers by the hackers.
The advisory board will be reviewing submissions until 2023-03-17. The final slide deck submission deadline for accepted talks is 2023-04-20.
First come, first served. Submissions will not be returned.
Quick facts for speakers
+ presentation length 60-120 minutes, in English
+ complimentary travel and accommodation for one person[6]
+ decent speaker hospitality benefits
+ no marketing or product propaganda
Still not sure if this is for you? Check out the blast from the past[2].
The total number of attendees, including speakers and organizers, is limited to 99. The Advisory Board recognizes[3] that OG Finnish sauna culture is an acquired taste and can promise a lack of sweaty, partially or fully nude sauna-goers at all conference functions.
[0] hunter2
[1] https://t2.fi/about/eula/
[2] https://t2.fi/schedules/
[3] https://youtu.be/Oj8JBBAM5jY?t=40
[6] except literally @nudehaberdasher and @0xcharlie
How to submit
Fill out the form at https://if.t2.fi/action/cfp
How to register
Buy your ticket at https://t2.fi/registration/
(Note, this is a continuation of our previous story chapter since sometimes
it's more fun to read fiction than to wonder what's going on these days
with Cloudflare or whatever.
https://lists.aitelfoundation.org/archives/list/dailydave@lists.aitelfounda…
)
Chapter 2
_________________________________________________
Landing in Miami is like visiting a tier of hell just below Limbo. It is
not saturated in evil so much as the established gateway to more evil
places. As you disembark from your flight you can almost see a direct line
from providing no-questions-asked banking to drug dealers in the eighties,
to offering an endless series of apartments (aka money hiding spots) to the
Venezuelan upper class, to the current endless series of crypto companies
headquartered in the newly hip Brickell office spaces next to SmileDirect
and fancy brew pubs.
In the sense that NYC deals in finances, San Francisco in software
companies, and Boston in "higher education", Miami deals in your more
generic small-scale scams, the underlying substrate upon which the rest
of the economy is based. The tropics engender a sort of flexibility and
adaptability which is about finding new scam-niches and exploiting them
before anyone else has caught on.
But your meeting here is not about crypto-coin or real estate built with
permeable concrete guaranteed to spall in the face of salt-water-laden
winds. It's with a company building testing software of all things. "Boring
is rewarding" you say to yourself, as you drive past a literal graveyard to
a small joint called "Hush" which you give an approving nod to.
Hush serves fried alligator, which tastes like fried anything, as you sit
across from your lunch companions, Stewart and Amy. They are drinking beers
you've never heard of, and they lay out their scheme, without regard to
OPSEC since nobody in this restaurant other than you would likely
understand it.
"We've been building a large set of unit testing libraries for
cryptographic primitives, lots of complicated string building stuff,
machine learning, you name it."
"Great." You say. "Always good to have quality testing libraries". But they
exchange a look and you realize you've misinterpreted them.
"Our public libraries have a tendency to ... sometimes think things are
very well written and secure when they are ... not. It's just sometimes our
unit testing has bugs, you know? We have really good documentation in a lot
of languages though. And great support. 24/7. Discord, Slack, forums, you
name it."
"So the theory here is you don't target any particular software in the
supply chain? You just encourage bad testing practices?" You're pondering
their value, while at the same time trying to think about what alligator
actually tastes like under all the grease. The flavor, as far as you can
determine, is "Chewy".
Stewart struggles for a second to get the words out, like a huge machine
optimized for literally anything other than the current task of explaining
things to other humans using words. "Sometimes it's best when the check to
see if ASLR is enabled doesn't actually work, so your bugs that you find
have a chance to be exploitable. We're not in the business of putting bugs
in things. We just make the bugs you do have....better."
"I see. What about code we actually want to be secure?"
"I recommend everyone local uses our FIPS certified library, which,
admittedly, is expensive and does the same thing as our free code, but
maybe with more effort put into the actual tests themselves," Amy says this
to you without any hint of chicanery, as if this is a simple fact, almost
not worth saying. It is, you realize, a very tropical CONOP.
"I will make sure this is required by various regulations after you are
funded. I'll have my team send you the paperwork," you say. And with that,
the conversation moves to pleasant nonsense as you internally contemplate
your next flight - out of here and into the cold.
-dave
If you were at a talk at Defcon this year in the Policy track, you probably
heard someone talk about how they, as a government official, are there to
address "market failures". And immediately you thought: This is a load of
nonsense.
Because that government official is not allowed to, and has no intentions
of, addressing any market failures whatsoever. If the Government were going
to address market failures, they'd have to find some way to stop every
cloud provider from making their security features the upsell on the
Platinum package. They'd have to talk about how trying to get into
different markets means every social media company faces huge pressures to
put Indian spies on their network.
Obviously you know, as someone who did not emerge from under a rock into
the security community yesterday, that the answer to having a malicious
insider on your network is probably some smart segmentation, which we call
"Zero Trust" now.
But Zero Trust is expensive! And most social media companies are not
exactly profitable as the great monster known as TikTok has eaten every
eyeball in every market because the very concept of having people
explicitly choose who their friends are is outdated now.
In fact, as everyone is pointing out, almost all companies you know are in
this position! They're cutting costs by sending jobs overseas while
spending huge amounts of money propping up their stock prices and paying
their executives to sell them to a dwindling market of buyers. Private
Equity companies spend every effort on squeezing the last dollar out of old
enterprise software by exploiting the lock-in they have on small
businesses.
And as critical as Twitter is, we have the exact same dynamic with our
privatized water and power companies - who have no plans to make strategic
investments in security or anything really - which is why on public calls
you can hear them humiliating themselves asking Jen Easterly to absorb the
entire costs of their security programs.
The ideal practice for all of these companies is to offload their costs
onto the taxpayer, which is why instead of investing in security, they cry
for the FBI to go collect their bitcoin from whatever ransomware crews are
on their network this week using offensive cyber operations that themselves
cost the government an order of magnitude more than the bitcoin is worth.
As you're sitting in that Defcon talk, listening to someone from government
talk about how they only want to regulate with the "input of industry" or
something, you have to wonder: if this is every company we know, maybe the
market failure isn't just how hard it is to buy a good security product
because they all abuse the copyright system to avoid any kind of
performance transparency. Maybe it's also how hard it is to SELL a good
security product because every single company is trying to cut their budget
to the exact minimum amount that will allow them to tell the FBI they did
their best, and the FBI needs to go out there and pick up their slack.
-dave
As you wander the halls of the inaptly named Caesar's Forum, amidst a
living sea of the most neurodiverse Clan humanity has ever seen, you cannot
help but stop for a second to close your eyes amidst the cacophony and
mentally exclaim, "Look. Look at the world we have created!"
Sitting in the one cafe in the Paris hotel with food, a
tattooed thirty-something who has been to Defcon twice gives you advice on
how to do the conference. "Take the unirail," they say. "Also, you should
have a hacker name! Mine is 'youngblood'"
"Noted!" you respond. These are good ideas. The unirail in particular,
probably, because Vegas is overflowing - and decent food options and
anywhere to sit that is not beeping at you or showing grungy dystopian TV
ads the Cyberpunk 2077 developers would find over-the-top are impossible to
come by, making the conference ten times more exhausting than usual.
In that sense, you miss the Alexis Park days, sitting with Halvar Flake
next to a pool where everyone was more larval than they knew, watching
Dildog launch BO2K to a thousand screaming fans in the same room where Dino
Dai Zovi explained Solaris hacking an hour earlier.
Some of the best talks this year had no attendees at all - Orange Tsai's
talk was over Zoom, to a huge room, but with few butts in the seats. There
were a hundred "Villages" it seemed like, living a half-life between
physical space in the conference room and a Discord channel.
Defcon may be the worst and best place to learn anything in that way - the
environment is hopelessly chaotic, with two talks happening inches away
from each other, and only feet from a DJ pumping out house music. But
perhaps the best environment to learn in is the one in which you are most
inspired?
My friends, we've conquered the world. What's next?
-dave
Right now, there is, to put it mildly, an ongoing discussion between
proponents of coercion and deterrence in cyber policy, and adherents of a
new theory, called *persistent engagement.* Maybe the sum total of the
people in the argument is less than a thousand, but as academic circles go,
it heavily influences the US Defense Department and IC, and through that,
the rest of the world, so it is fun to watch. Also obviously it has added
to infosec Twitter drama, which of course is the most important thing in
the whole Universe.
But while I try to keep this list technical, I wanted to put it into
context for people here, so they can better appreciate the Twitter drama.
But before I do that, I want to talk about a Defcon talk I attended. I'm
not going to say WHICH talk, since it was under Chatham House Rule, but it
was about cyber policy. When I pressed someone on an aspect of their policy
efforts and how it implicated technical experts without involving their
feedback (export control around penetration testing tools) they said "Well,
that was more an expression of our country's VALUES and so we didn't need
to listen to our technical experts".
And I thought that was very interesting! Because the technical community is
highly connected and paying attention to these sorts of things in a way
that didn't use to be the case. If your message on one issue is going to
be "When our values and the technical community's values don't align, we
don't bother listening to them" then they will all know immediately, and
all your other outreach efforts might as well be wasted air.
And this is true across the board - disintermediation via cyber is now a
universal truth.
I believe you can come at the theory of persistent engagement by looking
at it from a different perspective: Instead of saying "Here's a bunch of
data about what we see in cyber, and it looks a certain way, and that way
requires a new way of thinking" you ask yourself whether the fundamental
way of dealing with conflict in international relations literature can be
simplified down to coercion and deterrence when the system is a highly
connected network. In other words, the game theory math you would use for
dyads and bilateral relationships is great for looking at nuclear conflict
because that's how the problem is presented, but doesn't scale to the
problems we have for cyber conflicts, which are about emergent effects of
much more complicated systems.
That's why it's not just different, but downright wrong, to talk about
offense-defense balances when we look at cyber or cyber-enabled conflicts.
It's why the previous international relations work on deterrence and
coercion just doesn't apply cleanly, if at all. On one side (the wrong
side) you have people saying "Cyber is not strategic because it cannot hold
ground like infantry can!" and on the other side you have people building
international relations theories based on cycles of attack, on responses
and counter-responses to aggression in the cyber domain because you can
lead an entire country around by the nose ring that is TikTok.
At some level, we are going to have to stop talking about offensive cyber
operations as a corollary of SIGINT capability, and start looking more
holistically at COGINT.
To sum it up: Complexity in connectivity introduces phase changes in
systems. We now live in a highly connected world, and this means we need
new paradigms of international relations, whether you are under Chatham
House Rule or not.
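
(A toy illustration, not a policy model: one place you can actually watch a
phase change driven purely by connectivity is a random graph, where the
largest connected component jumps from negligible to spanning most of the
network once the average degree crosses 1. The sketch below, in plain Python
with a little union-find, exists only to make that claim concrete; the node
counts and degrees are arbitrary.)

import random

def largest_component(n, avg_degree):
    """Size of the largest connected component in a G(n, p) random graph,
    with p chosen to hit the requested average degree."""
    p = avg_degree / (n - 1)
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    for i in range(n):
        for j in range(i + 1, n):
            if random.random() < p:
                ri, rj = find(i), find(j)
                if ri != rj:
                    parent[ri] = rj

    sizes = {}
    for node in range(n):
        root = find(node)
        sizes[root] = sizes.get(root, 0) + 1
    return max(sizes.values())

if __name__ == "__main__":
    n = 2000
    for avg_degree in (0.5, 0.9, 1.1, 1.5, 2.0):
        print(f"avg degree {avg_degree}: largest component "
              f"{largest_component(n, avg_degree)} of {n} nodes")

Run it and the largest component stays tiny below an average degree of 1 and
abruptly swallows a large fraction of the nodes above it. Dyadic math never
sees that coming.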
-dave
People think that finding vulnerabilities is about finding holes in code.
But at some level it's not really about that. It's about understanding that
the code itself is a hole in the swirling chaos of the world and just
letting a little bit of that chaos in allows you to illuminate the whole
thing.
Spending time in Seattle is a little bit like buying a pair of high-powered
binoculars to look down the train tracks at that weird light that's heading
towards you. Seattle is a city perpetually timeless and jet lagged - as far
away as a giraffe's head from the country's dual beating hearts of New York
City and DC.
The city rests on an absolute bedrock of code. Code that feeds on lives
everywhere as voraciously and implacably as a blue whale gulping krill. In
that sense, the inhabitants of Seattle are those who have realized it's
better to be on top of the whale than inside it. It is perhaps why all the
architecture is as boxy as an early software package. If you pulled the lid
off any of the buildings next to the water you might see the packaging for a
Windows 95 CD ROM, or a bunch of floppies with a forgotten database.
When you go running past all these horribly efficient buildings down to the
water on the lone sunny day, you will be surprised by a bunch of naked
people, stripping down next to an old industrial park turned into a
playground, covering themselves in body paint before some eldritch
streaking ritual for the parade over the hill. Around them buzz
photographers immortalizing the moment. Memes infecting other memes like an
endless series of smaller wasp larvae.
Flying back to Miami, amongst the bridal parties and vacationers, over an
endless survey of drying rivers and lakes, the ravages of unchecked climate
change exposing raw pale edges the exact beige color of army pants. The
whole country - a patchwork tinderbox of exposed nerves.
With the right kind of eyes you can see a little bit of chaos being let in.
-dave
If you've walked through the Underworld long enough, you've run into
demons. Or maybe it's the other way around - by running into enough demons,
you might realize you are walking through the Underworld. And by making
friends with them, if you are lucky, you might realize you are a demon
yourself.
[image: image.png]
My brother in Zeus - this is just tempting the Fates.
Every so often an exploit from the Underworld is found. Maybe one or two a
year are dragged screaming curses in a long-dead language out into the
sunlight, pinned against a Kaspersky GReaT blogpost, and vivisected for the
world.
Sometimes these are simple bugs, with complex exploitation chains.
Sometimes these are complex bugs, but with reliable simple exploit chains.
Occasionally you see a host of bugs, all linked together like fire ants
fording a stream. If you've walked through the Underworld enough you'll
simply nod in recognition of them, perhaps stop to admire the artwork of
the Runes carved into their skins by some unknown spellcrafter.
My point is this: it doesn't matter what the real-world utility is for an
exploit, because demons don't care. They operate partially in the future,
perhaps. Or maybe ignoring real-world utility evolved out of the
necessity of staying ahead of the eyes hunting for them. I'm not sure. But
my rule - a core axiom of persistent engagement - is that if it can be
done, it is being done already.
-dave
I remember when fuzzing was just sending long strings to RPC programs, and
tapping the cloaca of all Unix programs, the signal handler, to see
what came out. But now, to be a hacker, you have to be a scientist.
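
(For anyone who missed that era, a minimal sketch in Python of what old-school
fuzzing amounted to: hurl ever-longer strings at a service and see whether it
falls over. The host, port and payload here are hypothetical placeholders, not
any particular RPC program.)

import socket

TARGET = ("127.0.0.1", 9999)  # hypothetical service, placeholder only

def poke(payload, timeout=2.0):
    """Connect, send one payload, and return whatever the service says back."""
    with socket.create_connection(TARGET, timeout=timeout) as s:
        s.sendall(payload + b"\r\n")
        return s.recv(4096)

if __name__ == "__main__":
    for length in (64, 256, 1024, 4096, 16384, 65536):
        try:
            reply = poke(b"A" * length)
            print(f"len={length}: {len(reply)} bytes back")
        except OSError as e:
            # The dropped connection is the interesting result.
            print(f"len={length}: connection died ({e})")

Nothing about that requires science. The point is that it used to be enough.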
Computer science is a real thing. But most computer scientists I know can't
explain how to do it because it comes out sounding like a deep dive into a
dungeons and dragons campaign run by toddlers. And perhaps, the hardest
thing with computer science is knowing when you're stuck, when the noise
inherent in your system has overwhelmed the signal, and hence, you.
Really I lied. The hardest thing is knowing when someone else on your team
is stuck and being able to reach into their understanding of the System and
unstick them. Because science, like digestion, is a team sport.
You can, if you want, undertake fun experiments. For example, you could as
a hacker just say publicly what 0days you know are sitting around, waiting
to be found. You can be as loud and annoying about it as possible, then
just wait a few years and see if there are any cool BlackHat talks on the
subject
<https://www.blackhat.com/us-20/briefings/schedule/#room-for-escape-scribbli…>
or not, and if the market makes any particular changes to how it deals with
that technology. There will not be any. This might make you ask more
questions - more uncomfortable ones.
"What is an acceptable parasitic load in a system?" you might ask in this
way. In the animal kingdom, it is an astonishing 40%
<https://www.nationalgeographic.com/animals/article/animals-evolution-parasi….>.
In computers, it is probably the same, where the science of hacking is
equally ignored and reviled, both profitable and prophetless.
Most hackers you know are specialized in the mystic art of Transformation.
A heap overrun becomes an information leak which becomes code execution. A
denial of service becomes a side channel attack becomes a local privilege
escalation. Sometimes it's hard to see the science in this. A friend of
yours will look down upon it as "just engineering". But it's not enough to
just find one bug anymore, or even one transformation of a single bug.
Every bug must pass through multiple slits at the same time now, like a
lost waveform. This takes some science.
My point is this: If you think you are defending against Engineers, but
really you are defending against Scientists, you've already lost. And if a
country wants to build and maintain offensive power in cyberspace, it has
to understand how to care for and nurture the places that treat it as a
science.
-dave
The most annoying thing with talking to computer scientists about anything
is they will look at any problem that remotely touches software and ask you
"Is that the right data structure? Are you ... sure?"
Like, this is what happens to every programming language - it's why you get
NaN or an empty list for any given arbitrary code fragment in Javascript.
People had a normal data structure, say a dictionary, and were like "What
if we OPTIMIZED IT for all the common situations?" And so now a Dictionary
is like a hybrid "Dictionary-List-Cache-Semi-Ordered-ViewMap" and it
changes everything about how it operates according to some internal
heuristic only some ancient and primal god of mischief could understand.
So when someone asks me why, in certain cases, my program returns weird
results right now, the REAL answer is, "Some computer scientist took what
could have been a perfectly good data structure, and gave it performance
anxiety". But project managers hate that answer. So instead they get to
hear about graph databases.
This brings me to two important and closely linked subjects: SBOM, and
venomous jellyfish.
As the mighty Halvar Flake once probably said to himself, "I can take any
hideously boring problem, and turn it into a fascinating and only a bit
unsolvable graph algorithm solution!" And this is where SBOMs currently
live.
Software is amazing, and people in cyber policy like to think of it as if
it was a book or long journal article, and you can take a snapshot of it,
and send it to your friend Bob with a version number 1.0 on it and Track
Changes and then they send it back with a version number 1.1 or
1.0-BobEdits and that's that.
But that is only what Loki, the god of lies, wants you to think.
Environment is a huge part of the equation! You can go to your local pond,
and get a carnivorous tadpole, an angry little hungry frog baby with a
giant beak that eats other frog babies, and show it to a biologist and ask
them the species, and they will tell you Spadefoot, and then find one
eating plants in the corner, just a cute little guy, and the biologist will
also tell you Spadefoot, and when you look at them confused they will shrug
and mumble something about phenotypic plasticity which is clearly a bunch
of words they made up to sound cool.
It is so with software. What software are we running? Well, the description
of software is rarely smaller than the software itself. It is usually much
bigger.
An SBOM could be described as a nested manifest of metadata about software.
But if you say that to a computer scientist you found drowsing on the beach
they will perk up like an evil sea otter who has spotted a bivalve and say
"Wait, are you sure about that data structure? Is it truly a directed
acyclic TREE structure, or is it more a ..." an awkwardly long pause will
ensue as they struggle to control their emotions "...graph?" At this point
you will realize you've made a mistake.
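
(To make the computer scientist's twitch concrete, here is a toy Python sketch
with made-up package names: the same zlib-ish component sits under two
different parents, so a tree walk of the "nested manifest" counts it twice
while a graph walk counts it once. Which of those your tooling does turns out
to matter.)

# Made-up dependency data for illustration only.
DEPENDS_ON = {
    "app":     ["libcurl", "libpng"],
    "libcurl": ["zlib"],
    "libpng":  ["zlib"],
    "zlib":    [],
}

def walk_as_tree(component):
    """Expand dependencies as a nested tree: shared components get duplicated."""
    out = [component]
    for dep in DEPENDS_ON[component]:
        out.extend(walk_as_tree(dep))
    return out

def walk_as_graph(component):
    """Expand dependencies as a graph: each component appears exactly once."""
    seen, stack = set(), [component]
    while stack:
        node = stack.pop()
        if node not in seen:
            seen.add(node)
            stack.extend(DEPENDS_ON[node])
    return seen

if __name__ == "__main__":
    print(walk_as_tree("app"))   # ['app', 'libcurl', 'zlib', 'libpng', 'zlib']
    print(walk_as_graph("app"))  # {'app', 'libcurl', 'libpng', 'zlib'}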
If you somehow manage to escape by diverting the topic into something about
eagles and Mordor, you can go on your merry way, building and selling tools
that work on Trees and only Trees. You will be arboreal, but rich. And then
someday someone will deliver a copy of Nature to your yacht by mistake,
which by this point will be the only way to traverse most of the East
coast, and you'll read about Jellyfish, or as the biologists will haughtily
inform you are now called simply, "Jellies". (The less money a scientist
makes, the more haughty and good looking.)
But because you are a "learned person" you will read this article
<https://www.nature.com/articles/news.2008.1134> about jellyfish and they
will let you know about horizontal gene transference, which breaks every
idea you had about how evolution worked. But it also might remind you about
backporting and cherry-picking and a lot of crazy stuff that happens in the
software world. So you might boat over to where that computer scientist
was, and ask them maybe if they can port all your Tree-working code to
Graph-working code.
And then, unfortunately for you, the story gets dark.
-dave
For many years GraphQL implementations have had massive issues with
access control/authorization and denial of service. This is a common
problem when you essentially give a database prompt to the client.
GraphQL is better off on the back end only, IMO.
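
One concrete flavor of the denial-of-service problem is nesting: a client can
ask for friends-of-friends-of-friends until the resolvers do an enormous
amount of work. Below is a minimal sketch, in Python and not tied to any
particular GraphQL server, of the usual mitigation - refusing to execute
queries past a depth limit. Real implementations do this on the parsed query
document; counting braces here is just to keep the toy self-contained.

ABUSIVE_QUERY = """
query {
  user(id: 1) {
    friends {
      friends {
        friends {
          friends { name }
        }
      }
    }
  }
}
"""

def max_depth(query):
    """Maximum selection-set nesting depth, approximated by brace counting."""
    depth = current = 0
    for ch in query:
        if ch == "{":
            current += 1
            depth = max(depth, current)
        elif ch == "}":
            current -= 1
    return depth

def allow_query(query, limit=4):
    """Gate to run before execution: reject anything nested past the limit."""
    return max_depth(query) <= limit

if __name__ == "__main__":
    print(max_depth(ABUSIVE_QUERY))    # 6
    print(allow_query(ABUSIVE_QUERY))  # False -> refuse before any resolver runs

The authorization half has no one-liner like that: every resolver has to
enforce object-level access on its own, which is exactly the part people skip.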
At OWASP we have an older cheatsheet on this topic that gets a lot of hits.
https://cheatsheetseries.owasp.org/cheatsheets/GraphQL_Cheat_Sheet.html
If you have suggestions to make this better let me know!
Regards,
jim(a)manicode.com
>
>> On Mar 5, 2022, at 10:14 AM, Dave Aitel via Dailydave
>> <dailydave(a)lists.aitelfoundation.org> wrote:
>> One of the best ways to get more performance out of your networked
>> system is to trust the client more. This is always a bad idea from a
>> security perspective, as everyone on this list knows, but it's fun to
>> see it reincarnated a thousand times in different bodies.
>>
>> So for example if your web application has endless structured data
>> always changing and you're sick of writing REST APIs and middleware
>> you start thinking - what if instead, I had a flexible Javascript API
>> in the client that just grabbed data right from the database, and
>> that database did the user-authorization?
>>
>> Anyways, GraphQL is interesting and makes the hacker in me, and
>> probably all of you, hungry. In general, from a security perspective,
>> "Let the user talk directly to the database, but we FILTER it a bit"
>> is always a hilarious losing perspective, like trying to outswim a
>> shark, or having just one more drink of Goldschlager.
>>
>> Any filter or translation layer starts introducing protocol
>> desynchronization vulnerabilities, of course, but also you have to
>> worry about timing oracles, denial of service via resource
>> exhaustion, authorization mistakes, and a whole host of nightmares
>> that hackers at this point can pull out of endless old Bugtraq posts
>> whenever they feel like they need a conference talk at your expense.
>>
>> People often make the mistake of correlating "No plugin exists in
>> BURP for this attack surface" with "this new technology is more
>> secure than the last one!"
>>
>> What confuses me is when people deploy huge web applications based on
>> this sort of thing: you would think they would at least ask the giant,
>> VC funded companies, "Which security team looked at and gave you a
>> review of this tech? What if the whole thing is a bad idea?" Like in
>> five years are we going to realize that you can't give users the
>> ability to run arbitrary regular expressions on your extremely
>> complicated database without so many checks and balances that it
>> ruins the whole point of having them connect to the database in the
>> first place? Yes, yes we are.
>>
>> On one hand, this is a sad state of affairs. On the other hand, who
>> are we without it? This failure is the upwelling current that brings
>> nutrients from the ocean floor to our arctic habitat. This is the
>> solar wind of quantum bits we float from planet to planet on. This is
>> the brief touch of a child's hand on the belly of the Buddha. This is
>> truth in the way that we know it.
>>
>> -dave
>> ----
>> Resources:
>> https://blog.forcesunseen.com/a-primer-for-testing-the-security-of-graphql-…
>> https://medium.com/csg-govtech/closing-the-loop-practical-attacks-and-defen…